Feb  1 09:16:57 np0005604375 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Feb  1 09:16:57 np0005604375 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb  1 09:16:57 np0005604375 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  1 09:16:57 np0005604375 kernel: BIOS-provided physical RAM map:
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb  1 09:16:57 np0005604375 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb  1 09:16:57 np0005604375 kernel: NX (Execute Disable) protection: active
Feb  1 09:16:57 np0005604375 kernel: APIC: Static calls initialized
Feb  1 09:16:57 np0005604375 kernel: SMBIOS 2.8 present.
Feb  1 09:16:57 np0005604375 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb  1 09:16:57 np0005604375 kernel: Hypervisor detected: KVM
Feb  1 09:16:57 np0005604375 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  1 09:16:57 np0005604375 kernel: kvm-clock: using sched offset of 5624425070 cycles
Feb  1 09:16:57 np0005604375 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  1 09:16:57 np0005604375 kernel: tsc: Detected 2800.000 MHz processor
Feb  1 09:16:58 np0005604375 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb  1 09:16:58 np0005604375 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb  1 09:16:58 np0005604375 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  1 09:16:58 np0005604375 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb  1 09:16:58 np0005604375 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb  1 09:16:58 np0005604375 kernel: Using GB pages for direct mapping
Feb  1 09:16:58 np0005604375 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Feb  1 09:16:58 np0005604375 kernel: ACPI: Early table checksum verification disabled
Feb  1 09:16:58 np0005604375 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb  1 09:16:58 np0005604375 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  1 09:16:58 np0005604375 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  1 09:16:58 np0005604375 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  1 09:16:58 np0005604375 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb  1 09:16:58 np0005604375 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  1 09:16:58 np0005604375 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  1 09:16:58 np0005604375 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb  1 09:16:58 np0005604375 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb  1 09:16:58 np0005604375 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb  1 09:16:58 np0005604375 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb  1 09:16:58 np0005604375 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb  1 09:16:58 np0005604375 kernel: No NUMA configuration found
Feb  1 09:16:58 np0005604375 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb  1 09:16:58 np0005604375 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb  1 09:16:58 np0005604375 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb  1 09:16:58 np0005604375 kernel: Zone ranges:
Feb  1 09:16:58 np0005604375 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  1 09:16:58 np0005604375 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb  1 09:16:58 np0005604375 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb  1 09:16:58 np0005604375 kernel:  Device   empty
Feb  1 09:16:58 np0005604375 kernel: Movable zone start for each node
Feb  1 09:16:58 np0005604375 kernel: Early memory node ranges
Feb  1 09:16:58 np0005604375 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb  1 09:16:58 np0005604375 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb  1 09:16:58 np0005604375 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb  1 09:16:58 np0005604375 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb  1 09:16:58 np0005604375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  1 09:16:58 np0005604375 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb  1 09:16:58 np0005604375 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb  1 09:16:58 np0005604375 kernel: ACPI: PM-Timer IO Port: 0x608
Feb  1 09:16:58 np0005604375 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  1 09:16:58 np0005604375 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  1 09:16:58 np0005604375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  1 09:16:58 np0005604375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  1 09:16:58 np0005604375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  1 09:16:58 np0005604375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  1 09:16:58 np0005604375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  1 09:16:58 np0005604375 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  1 09:16:58 np0005604375 kernel: TSC deadline timer available
Feb  1 09:16:58 np0005604375 kernel: CPU topo: Max. logical packages:   8
Feb  1 09:16:58 np0005604375 kernel: CPU topo: Max. logical dies:       8
Feb  1 09:16:58 np0005604375 kernel: CPU topo: Max. dies per package:   1
Feb  1 09:16:58 np0005604375 kernel: CPU topo: Max. threads per core:   1
Feb  1 09:16:58 np0005604375 kernel: CPU topo: Num. cores per package:     1
Feb  1 09:16:58 np0005604375 kernel: CPU topo: Num. threads per package:   1
Feb  1 09:16:58 np0005604375 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb  1 09:16:58 np0005604375 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb  1 09:16:58 np0005604375 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb  1 09:16:58 np0005604375 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb  1 09:16:58 np0005604375 kernel: Booting paravirtualized kernel on KVM
Feb  1 09:16:58 np0005604375 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  1 09:16:58 np0005604375 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb  1 09:16:58 np0005604375 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb  1 09:16:58 np0005604375 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb  1 09:16:58 np0005604375 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  1 09:16:58 np0005604375 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Feb  1 09:16:58 np0005604375 kernel: random: crng init done
Feb  1 09:16:58 np0005604375 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: Fallback order for Node 0: 0 
Feb  1 09:16:58 np0005604375 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb  1 09:16:58 np0005604375 kernel: Policy zone: Normal
Feb  1 09:16:58 np0005604375 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  1 09:16:58 np0005604375 kernel: software IO TLB: area num 8.
Feb  1 09:16:58 np0005604375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb  1 09:16:58 np0005604375 kernel: ftrace: allocating 49438 entries in 194 pages
Feb  1 09:16:58 np0005604375 kernel: ftrace: allocated 194 pages with 3 groups
Feb  1 09:16:58 np0005604375 kernel: Dynamic Preempt: voluntary
Feb  1 09:16:58 np0005604375 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb  1 09:16:58 np0005604375 kernel: rcu: 	RCU event tracing is enabled.
Feb  1 09:16:58 np0005604375 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb  1 09:16:58 np0005604375 kernel: 	Trampoline variant of Tasks RCU enabled.
Feb  1 09:16:58 np0005604375 kernel: 	Rude variant of Tasks RCU enabled.
Feb  1 09:16:58 np0005604375 kernel: 	Tracing variant of Tasks RCU enabled.
Feb  1 09:16:58 np0005604375 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  1 09:16:58 np0005604375 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb  1 09:16:58 np0005604375 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  1 09:16:58 np0005604375 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  1 09:16:58 np0005604375 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  1 09:16:58 np0005604375 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb  1 09:16:58 np0005604375 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb  1 09:16:58 np0005604375 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb  1 09:16:58 np0005604375 kernel: Console: colour VGA+ 80x25
Feb  1 09:16:58 np0005604375 kernel: printk: console [ttyS0] enabled
Feb  1 09:16:58 np0005604375 kernel: ACPI: Core revision 20230331
Feb  1 09:16:58 np0005604375 kernel: APIC: Switch to symmetric I/O mode setup
Feb  1 09:16:58 np0005604375 kernel: x2apic enabled
Feb  1 09:16:58 np0005604375 kernel: APIC: Switched APIC routing to: physical x2apic
Feb  1 09:16:58 np0005604375 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb  1 09:16:58 np0005604375 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb  1 09:16:58 np0005604375 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb  1 09:16:58 np0005604375 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb  1 09:16:58 np0005604375 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb  1 09:16:58 np0005604375 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb  1 09:16:58 np0005604375 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb  1 09:16:58 np0005604375 kernel: Spectre V2 : Mitigation: Retpolines
Feb  1 09:16:58 np0005604375 kernel: RETBleed: Mitigation: untrained return thunk
Feb  1 09:16:58 np0005604375 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb  1 09:16:58 np0005604375 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  1 09:16:58 np0005604375 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb  1 09:16:58 np0005604375 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb  1 09:16:58 np0005604375 kernel: active return thunk: retbleed_return_thunk
Feb  1 09:16:58 np0005604375 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb  1 09:16:58 np0005604375 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  1 09:16:58 np0005604375 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  1 09:16:58 np0005604375 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  1 09:16:58 np0005604375 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  1 09:16:58 np0005604375 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb  1 09:16:58 np0005604375 kernel: Freeing SMP alternatives memory: 40K
Feb  1 09:16:58 np0005604375 kernel: pid_max: default: 32768 minimum: 301
Feb  1 09:16:58 np0005604375 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb  1 09:16:58 np0005604375 kernel: landlock: Up and running.
Feb  1 09:16:58 np0005604375 kernel: Yama: becoming mindful.
Feb  1 09:16:58 np0005604375 kernel: SELinux:  Initializing.
Feb  1 09:16:58 np0005604375 kernel: LSM support for eBPF active
Feb  1 09:16:58 np0005604375 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb  1 09:16:58 np0005604375 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb  1 09:16:58 np0005604375 kernel: ... version:                0
Feb  1 09:16:58 np0005604375 kernel: ... bit width:              48
Feb  1 09:16:58 np0005604375 kernel: ... generic registers:      6
Feb  1 09:16:58 np0005604375 kernel: ... value mask:             0000ffffffffffff
Feb  1 09:16:58 np0005604375 kernel: ... max period:             00007fffffffffff
Feb  1 09:16:58 np0005604375 kernel: ... fixed-purpose events:   0
Feb  1 09:16:58 np0005604375 kernel: ... event mask:             000000000000003f
Feb  1 09:16:58 np0005604375 kernel: signal: max sigframe size: 1776
Feb  1 09:16:58 np0005604375 kernel: rcu: Hierarchical SRCU implementation.
Feb  1 09:16:58 np0005604375 kernel: rcu: 	Max phase no-delay instances is 400.
Feb  1 09:16:58 np0005604375 kernel: smp: Bringing up secondary CPUs ...
Feb  1 09:16:58 np0005604375 kernel: smpboot: x86: Booting SMP configuration:
Feb  1 09:16:58 np0005604375 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb  1 09:16:58 np0005604375 kernel: smp: Brought up 1 node, 8 CPUs
Feb  1 09:16:58 np0005604375 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb  1 09:16:58 np0005604375 kernel: node 0 deferred pages initialised in 10ms
Feb  1 09:16:58 np0005604375 kernel: Memory: 7763776K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618400K reserved, 0K cma-reserved)
Feb  1 09:16:58 np0005604375 kernel: devtmpfs: initialized
Feb  1 09:16:58 np0005604375 kernel: x86/mm: Memory block size: 128MB
Feb  1 09:16:58 np0005604375 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  1 09:16:58 np0005604375 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb  1 09:16:58 np0005604375 kernel: pinctrl core: initialized pinctrl subsystem
Feb  1 09:16:58 np0005604375 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  1 09:16:58 np0005604375 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb  1 09:16:58 np0005604375 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb  1 09:16:58 np0005604375 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb  1 09:16:58 np0005604375 kernel: audit: initializing netlink subsys (disabled)
Feb  1 09:16:58 np0005604375 kernel: audit: type=2000 audit(1769955417.487:1): state=initialized audit_enabled=0 res=1
Feb  1 09:16:58 np0005604375 kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb  1 09:16:58 np0005604375 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  1 09:16:58 np0005604375 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  1 09:16:58 np0005604375 kernel: cpuidle: using governor menu
Feb  1 09:16:58 np0005604375 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  1 09:16:58 np0005604375 kernel: PCI: Using configuration type 1 for base access
Feb  1 09:16:58 np0005604375 kernel: PCI: Using configuration type 1 for extended access
Feb  1 09:16:58 np0005604375 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  1 09:16:58 np0005604375 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb  1 09:16:58 np0005604375 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb  1 09:16:58 np0005604375 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb  1 09:16:58 np0005604375 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb  1 09:16:58 np0005604375 kernel: Demotion targets for Node 0: null
Feb  1 09:16:58 np0005604375 kernel: cryptd: max_cpu_qlen set to 1000
Feb  1 09:16:58 np0005604375 kernel: ACPI: Added _OSI(Module Device)
Feb  1 09:16:58 np0005604375 kernel: ACPI: Added _OSI(Processor Device)
Feb  1 09:16:58 np0005604375 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  1 09:16:58 np0005604375 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  1 09:16:58 np0005604375 kernel: ACPI: Interpreter enabled
Feb  1 09:16:58 np0005604375 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb  1 09:16:58 np0005604375 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  1 09:16:58 np0005604375 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  1 09:16:58 np0005604375 kernel: PCI: Using E820 reservations for host bridge windows
Feb  1 09:16:58 np0005604375 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  1 09:16:58 np0005604375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  1 09:16:58 np0005604375 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [3] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [4] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [5] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [6] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [7] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [8] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [9] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [10] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [11] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [12] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [13] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [14] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [15] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [16] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [17] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [18] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [19] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [20] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [21] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [22] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [23] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [24] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [25] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [26] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [27] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [28] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [29] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [30] registered
Feb  1 09:16:58 np0005604375 kernel: acpiphp: Slot [31] registered
Feb  1 09:16:58 np0005604375 kernel: PCI host bridge to bus 0000:00
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb  1 09:16:58 np0005604375 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  1 09:16:58 np0005604375 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  1 09:16:58 np0005604375 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  1 09:16:58 np0005604375 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  1 09:16:58 np0005604375 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  1 09:16:58 np0005604375 kernel: iommu: Default domain type: Translated
Feb  1 09:16:58 np0005604375 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb  1 09:16:58 np0005604375 kernel: SCSI subsystem initialized
Feb  1 09:16:58 np0005604375 kernel: ACPI: bus type USB registered
Feb  1 09:16:58 np0005604375 kernel: usbcore: registered new interface driver usbfs
Feb  1 09:16:58 np0005604375 kernel: usbcore: registered new interface driver hub
Feb  1 09:16:58 np0005604375 kernel: usbcore: registered new device driver usb
Feb  1 09:16:58 np0005604375 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  1 09:16:58 np0005604375 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  1 09:16:58 np0005604375 kernel: PTP clock support registered
Feb  1 09:16:58 np0005604375 kernel: EDAC MC: Ver: 3.0.0
Feb  1 09:16:58 np0005604375 kernel: NetLabel: Initializing
Feb  1 09:16:58 np0005604375 kernel: NetLabel:  domain hash size = 128
Feb  1 09:16:58 np0005604375 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb  1 09:16:58 np0005604375 kernel: NetLabel:  unlabeled traffic allowed by default
Feb  1 09:16:58 np0005604375 kernel: PCI: Using ACPI for IRQ routing
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  1 09:16:58 np0005604375 kernel: vgaarb: loaded
Feb  1 09:16:58 np0005604375 kernel: clocksource: Switched to clocksource kvm-clock
Feb  1 09:16:58 np0005604375 kernel: VFS: Disk quotas dquot_6.6.0
Feb  1 09:16:58 np0005604375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  1 09:16:58 np0005604375 kernel: pnp: PnP ACPI init
Feb  1 09:16:58 np0005604375 kernel: pnp: PnP ACPI: found 5 devices
Feb  1 09:16:58 np0005604375 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  1 09:16:58 np0005604375 kernel: NET: Registered PF_INET protocol family
Feb  1 09:16:58 np0005604375 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb  1 09:16:58 np0005604375 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  1 09:16:58 np0005604375 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  1 09:16:58 np0005604375 kernel: NET: Registered PF_XDP protocol family
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb  1 09:16:58 np0005604375 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  1 09:16:58 np0005604375 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb  1 09:16:58 np0005604375 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 22652 usecs
Feb  1 09:16:58 np0005604375 kernel: PCI: CLS 0 bytes, default 64
Feb  1 09:16:58 np0005604375 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb  1 09:16:58 np0005604375 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb  1 09:16:58 np0005604375 kernel: ACPI: bus type thunderbolt registered
Feb  1 09:16:58 np0005604375 kernel: Trying to unpack rootfs image as initramfs...
Feb  1 09:16:58 np0005604375 kernel: Initialise system trusted keyrings
Feb  1 09:16:58 np0005604375 kernel: Key type blacklist registered
Feb  1 09:16:58 np0005604375 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb  1 09:16:58 np0005604375 kernel: zbud: loaded
Feb  1 09:16:58 np0005604375 kernel: integrity: Platform Keyring initialized
Feb  1 09:16:58 np0005604375 kernel: integrity: Machine keyring initialized
Feb  1 09:16:58 np0005604375 kernel: Freeing initrd memory: 88000K
Feb  1 09:16:58 np0005604375 kernel: NET: Registered PF_ALG protocol family
Feb  1 09:16:58 np0005604375 kernel: xor: automatically using best checksumming function   avx       
Feb  1 09:16:58 np0005604375 kernel: Key type asymmetric registered
Feb  1 09:16:58 np0005604375 kernel: Asymmetric key parser 'x509' registered
Feb  1 09:16:58 np0005604375 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb  1 09:16:58 np0005604375 kernel: io scheduler mq-deadline registered
Feb  1 09:16:58 np0005604375 kernel: io scheduler kyber registered
Feb  1 09:16:58 np0005604375 kernel: io scheduler bfq registered
Feb  1 09:16:58 np0005604375 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb  1 09:16:58 np0005604375 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb  1 09:16:58 np0005604375 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb  1 09:16:58 np0005604375 kernel: ACPI: button: Power Button [PWRF]
Feb  1 09:16:58 np0005604375 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb  1 09:16:58 np0005604375 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  1 09:16:58 np0005604375 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  1 09:16:58 np0005604375 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  1 09:16:58 np0005604375 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  1 09:16:58 np0005604375 kernel: Non-volatile memory driver v1.3
Feb  1 09:16:58 np0005604375 kernel: rdac: device handler registered
Feb  1 09:16:58 np0005604375 kernel: hp_sw: device handler registered
Feb  1 09:16:58 np0005604375 kernel: emc: device handler registered
Feb  1 09:16:58 np0005604375 kernel: alua: device handler registered
Feb  1 09:16:58 np0005604375 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb  1 09:16:58 np0005604375 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb  1 09:16:58 np0005604375 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb  1 09:16:58 np0005604375 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb  1 09:16:58 np0005604375 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb  1 09:16:58 np0005604375 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb  1 09:16:58 np0005604375 kernel: usb usb1: Product: UHCI Host Controller
Feb  1 09:16:58 np0005604375 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Feb  1 09:16:58 np0005604375 kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb  1 09:16:58 np0005604375 kernel: hub 1-0:1.0: USB hub found
Feb  1 09:16:58 np0005604375 kernel: hub 1-0:1.0: 2 ports detected
Feb  1 09:16:58 np0005604375 kernel: usbcore: registered new interface driver usbserial_generic
Feb  1 09:16:58 np0005604375 kernel: usbserial: USB Serial support registered for generic
Feb  1 09:16:58 np0005604375 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  1 09:16:58 np0005604375 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  1 09:16:58 np0005604375 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  1 09:16:58 np0005604375 kernel: mousedev: PS/2 mouse device common for all mice
Feb  1 09:16:58 np0005604375 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb  1 09:16:58 np0005604375 kernel: rtc_cmos 00:04: registered as rtc0
Feb  1 09:16:58 np0005604375 kernel: rtc_cmos 00:04: setting system clock to 2026-02-01T14:16:57 UTC (1769955417)
Feb  1 09:16:58 np0005604375 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb  1 09:16:58 np0005604375 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb  1 09:16:58 np0005604375 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb  1 09:16:58 np0005604375 kernel: usbcore: registered new interface driver usbhid
Feb  1 09:16:58 np0005604375 kernel: usbhid: USB HID core driver
Feb  1 09:16:58 np0005604375 kernel: drop_monitor: Initializing network drop monitor service
Feb  1 09:16:58 np0005604375 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb  1 09:16:58 np0005604375 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb  1 09:16:58 np0005604375 kernel: Initializing XFRM netlink socket
Feb  1 09:16:58 np0005604375 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb  1 09:16:58 np0005604375 kernel: NET: Registered PF_INET6 protocol family
Feb  1 09:16:58 np0005604375 kernel: Segment Routing with IPv6
Feb  1 09:16:58 np0005604375 kernel: NET: Registered PF_PACKET protocol family
Feb  1 09:16:58 np0005604375 kernel: mpls_gso: MPLS GSO support
Feb  1 09:16:58 np0005604375 kernel: IPI shorthand broadcast: enabled
Feb  1 09:16:58 np0005604375 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  1 09:16:58 np0005604375 kernel: AES CTR mode by8 optimization enabled
Feb  1 09:16:58 np0005604375 kernel: sched_clock: Marking stable (889001740, 153862790)->(1136033970, -93169440)
Feb  1 09:16:58 np0005604375 kernel: registered taskstats version 1
Feb  1 09:16:58 np0005604375 kernel: Loading compiled-in X.509 certificates
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb  1 09:16:58 np0005604375 kernel: Demotion targets for Node 0: null
Feb  1 09:16:58 np0005604375 kernel: page_owner is disabled
Feb  1 09:16:58 np0005604375 kernel: Key type .fscrypt registered
Feb  1 09:16:58 np0005604375 kernel: Key type fscrypt-provisioning registered
Feb  1 09:16:58 np0005604375 kernel: Key type big_key registered
Feb  1 09:16:58 np0005604375 kernel: Key type encrypted registered
Feb  1 09:16:58 np0005604375 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  1 09:16:58 np0005604375 kernel: Loading compiled-in module X.509 certificates
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  1 09:16:58 np0005604375 kernel: ima: Allocated hash algorithm: sha256
Feb  1 09:16:58 np0005604375 kernel: ima: No architecture policies found
Feb  1 09:16:58 np0005604375 kernel: evm: Initialising EVM extended attributes:
Feb  1 09:16:58 np0005604375 kernel: evm: security.selinux
Feb  1 09:16:58 np0005604375 kernel: evm: security.SMACK64 (disabled)
Feb  1 09:16:58 np0005604375 kernel: evm: security.SMACK64EXEC (disabled)
Feb  1 09:16:58 np0005604375 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb  1 09:16:58 np0005604375 kernel: evm: security.SMACK64MMAP (disabled)
Feb  1 09:16:58 np0005604375 kernel: evm: security.apparmor (disabled)
Feb  1 09:16:58 np0005604375 kernel: evm: security.ima
Feb  1 09:16:58 np0005604375 kernel: evm: security.capability
Feb  1 09:16:58 np0005604375 kernel: evm: HMAC attrs: 0x1
Feb  1 09:16:58 np0005604375 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb  1 09:16:58 np0005604375 kernel: Running certificate verification RSA selftest
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb  1 09:16:58 np0005604375 kernel: Running certificate verification ECDSA selftest
Feb  1 09:16:58 np0005604375 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb  1 09:16:58 np0005604375 kernel: clk: Disabling unused clocks
Feb  1 09:16:58 np0005604375 kernel: Freeing unused decrypted memory: 2028K
Feb  1 09:16:58 np0005604375 kernel: Freeing unused kernel image (initmem) memory: 4196K
Feb  1 09:16:58 np0005604375 kernel: Write protecting the kernel read-only data: 30720k
Feb  1 09:16:58 np0005604375 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Feb  1 09:16:58 np0005604375 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb  1 09:16:58 np0005604375 kernel: Run /init as init process
Feb  1 09:16:58 np0005604375 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  1 09:16:58 np0005604375 systemd: Detected virtualization kvm.
Feb  1 09:16:58 np0005604375 systemd: Detected architecture x86-64.
Feb  1 09:16:58 np0005604375 systemd: Running in initrd.
Feb  1 09:16:58 np0005604375 systemd: No hostname configured, using default hostname.
Feb  1 09:16:58 np0005604375 systemd: Hostname set to <localhost>.
Feb  1 09:16:58 np0005604375 systemd: Initializing machine ID from VM UUID.
Feb  1 09:16:58 np0005604375 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb  1 09:16:58 np0005604375 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb  1 09:16:58 np0005604375 kernel: usb 1-1: Product: QEMU USB Tablet
Feb  1 09:16:58 np0005604375 kernel: usb 1-1: Manufacturer: QEMU
Feb  1 09:16:58 np0005604375 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb  1 09:16:58 np0005604375 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb  1 09:16:58 np0005604375 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb  1 09:16:58 np0005604375 systemd: Queued start job for default target Initrd Default Target.
Feb  1 09:16:58 np0005604375 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  1 09:16:58 np0005604375 systemd: Reached target Local Encrypted Volumes.
Feb  1 09:16:58 np0005604375 systemd: Reached target Initrd /usr File System.
Feb  1 09:16:58 np0005604375 systemd: Reached target Local File Systems.
Feb  1 09:16:58 np0005604375 systemd: Reached target Path Units.
Feb  1 09:16:58 np0005604375 systemd: Reached target Slice Units.
Feb  1 09:16:58 np0005604375 systemd: Reached target Swaps.
Feb  1 09:16:58 np0005604375 systemd: Reached target Timer Units.
Feb  1 09:16:58 np0005604375 systemd: Listening on D-Bus System Message Bus Socket.
Feb  1 09:16:58 np0005604375 systemd: Listening on Journal Socket (/dev/log).
Feb  1 09:16:58 np0005604375 systemd: Listening on Journal Socket.
Feb  1 09:16:58 np0005604375 systemd: Listening on udev Control Socket.
Feb  1 09:16:58 np0005604375 systemd: Listening on udev Kernel Socket.
Feb  1 09:16:58 np0005604375 systemd: Reached target Socket Units.
Feb  1 09:16:58 np0005604375 systemd: Starting Create List of Static Device Nodes...
Feb  1 09:16:58 np0005604375 systemd: Starting Journal Service...
Feb  1 09:16:58 np0005604375 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  1 09:16:58 np0005604375 systemd: Starting Apply Kernel Variables...
Feb  1 09:16:58 np0005604375 systemd: Starting Create System Users...
Feb  1 09:16:58 np0005604375 systemd: Starting Setup Virtual Console...
Feb  1 09:16:58 np0005604375 systemd: Finished Create List of Static Device Nodes.
Feb  1 09:16:58 np0005604375 systemd: Finished Apply Kernel Variables.
Feb  1 09:16:58 np0005604375 systemd: Finished Create System Users.
Feb  1 09:16:58 np0005604375 systemd: Starting Create Static Device Nodes in /dev...
Feb  1 09:16:58 np0005604375 systemd-journald[305]: Journal started
Feb  1 09:16:58 np0005604375 systemd-journald[305]: Runtime Journal (/run/log/journal/072bb88ed455426ca85083903b041dc8) is 8.0M, max 153.6M, 145.6M free.
Feb  1 09:16:58 np0005604375 systemd-sysusers[310]: Creating group 'users' with GID 100.
Feb  1 09:16:58 np0005604375 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Feb  1 09:16:58 np0005604375 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb  1 09:16:58 np0005604375 systemd: Started Journal Service.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting Create Volatile Files and Directories...
Feb  1 09:16:58 np0005604375 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  1 09:16:58 np0005604375 systemd[1]: Finished Create Volatile Files and Directories.
Feb  1 09:16:58 np0005604375 systemd[1]: Finished Setup Virtual Console.
Feb  1 09:16:58 np0005604375 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting dracut cmdline hook...
Feb  1 09:16:58 np0005604375 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Feb  1 09:16:58 np0005604375 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  1 09:16:58 np0005604375 systemd[1]: Finished dracut cmdline hook.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting dracut pre-udev hook...
Feb  1 09:16:58 np0005604375 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  1 09:16:58 np0005604375 kernel: device-mapper: uevent: version 1.0.3
Feb  1 09:16:58 np0005604375 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb  1 09:16:58 np0005604375 kernel: RPC: Registered named UNIX socket transport module.
Feb  1 09:16:58 np0005604375 kernel: RPC: Registered udp transport module.
Feb  1 09:16:58 np0005604375 kernel: RPC: Registered tcp transport module.
Feb  1 09:16:58 np0005604375 kernel: RPC: Registered tcp-with-tls transport module.
Feb  1 09:16:58 np0005604375 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb  1 09:16:58 np0005604375 rpc.statd[443]: Version 2.5.4 starting
Feb  1 09:16:58 np0005604375 rpc.statd[443]: Initializing NSM state
Feb  1 09:16:58 np0005604375 rpc.idmapd[448]: Setting log level to 0
Feb  1 09:16:58 np0005604375 systemd[1]: Finished dracut pre-udev hook.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  1 09:16:58 np0005604375 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Feb  1 09:16:58 np0005604375 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting dracut pre-trigger hook...
Feb  1 09:16:58 np0005604375 systemd[1]: Finished dracut pre-trigger hook.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting Coldplug All udev Devices...
Feb  1 09:16:58 np0005604375 systemd[1]: Created slice Slice /system/modprobe.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting Load Kernel Module configfs...
Feb  1 09:16:58 np0005604375 systemd[1]: Finished Coldplug All udev Devices.
Feb  1 09:16:58 np0005604375 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  1 09:16:58 np0005604375 systemd[1]: Finished Load Kernel Module configfs.
Feb  1 09:16:58 np0005604375 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  1 09:16:58 np0005604375 systemd[1]: Reached target Network.
Feb  1 09:16:58 np0005604375 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  1 09:16:58 np0005604375 systemd[1]: Starting dracut initqueue hook...
Feb  1 09:16:58 np0005604375 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb  1 09:16:58 np0005604375 kernel: scsi host0: ata_piix
Feb  1 09:16:58 np0005604375 kernel: scsi host1: ata_piix
Feb  1 09:16:58 np0005604375 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb  1 09:16:58 np0005604375 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb  1 09:16:58 np0005604375 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb  1 09:16:58 np0005604375 kernel: vda: vda1
Feb  1 09:16:58 np0005604375 kernel: ata1: found unknown device (class 0)
Feb  1 09:16:58 np0005604375 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb  1 09:16:58 np0005604375 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb  1 09:16:58 np0005604375 systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Feb  1 09:16:58 np0005604375 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb  1 09:16:58 np0005604375 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb  1 09:16:58 np0005604375 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb  1 09:16:58 np0005604375 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  1 09:16:58 np0005604375 systemd[1]: Reached target Initrd Root Device.
Feb  1 09:16:58 np0005604375 systemd[1]: Finished dracut initqueue hook.
Feb  1 09:16:58 np0005604375 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  1 09:16:58 np0005604375 systemd[1]: Reached target Remote Encrypted Volumes.
Feb  1 09:16:58 np0005604375 systemd[1]: Reached target Remote File Systems.
Feb  1 09:16:58 np0005604375 systemd[1]: Starting dracut pre-mount hook...
Feb  1 09:16:58 np0005604375 systemd[1]: Mounting Kernel Configuration File System...
Feb  1 09:16:59 np0005604375 systemd[1]: Finished dracut pre-mount hook.
Feb  1 09:16:59 np0005604375 systemd[1]: Mounted Kernel Configuration File System.
Feb  1 09:16:59 np0005604375 systemd[1]: Reached target System Initialization.
Feb  1 09:16:59 np0005604375 systemd[1]: Reached target Basic System.
Feb  1 09:16:59 np0005604375 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Feb  1 09:16:59 np0005604375 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Feb  1 09:16:59 np0005604375 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  1 09:16:59 np0005604375 systemd[1]: Mounting /sysroot...
Feb  1 09:16:59 np0005604375 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb  1 09:16:59 np0005604375 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Feb  1 09:16:59 np0005604375 kernel: XFS (vda1): Ending clean mount
Feb  1 09:16:59 np0005604375 systemd[1]: Mounted /sysroot.
Feb  1 09:16:59 np0005604375 systemd[1]: Reached target Initrd Root File System.
Feb  1 09:16:59 np0005604375 systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb  1 09:16:59 np0005604375 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb  1 09:16:59 np0005604375 systemd[1]: Reached target Initrd File Systems.
Feb  1 09:16:59 np0005604375 systemd[1]: Reached target Initrd Default Target.
Feb  1 09:16:59 np0005604375 systemd[1]: Starting dracut mount hook...
Feb  1 09:16:59 np0005604375 systemd[1]: Finished dracut mount hook.
Feb  1 09:16:59 np0005604375 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb  1 09:16:59 np0005604375 rpc.idmapd[448]: exiting on signal 15
Feb  1 09:16:59 np0005604375 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb  1 09:16:59 np0005604375 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Network.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Timer Units.
Feb  1 09:16:59 np0005604375 systemd[1]: dbus.socket: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb  1 09:16:59 np0005604375 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Initrd Default Target.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Basic System.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Initrd Root Device.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Initrd /usr File System.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Path Units.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Remote File Systems.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Preparation for Remote File Systems.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Slice Units.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Socket Units.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target System Initialization.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Local File Systems.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Swaps.
Feb  1 09:16:59 np0005604375 systemd[1]: dracut-mount.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped dracut mount hook.
Feb  1 09:16:59 np0005604375 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped dracut pre-mount hook.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped target Local Encrypted Volumes.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb  1 09:16:59 np0005604375 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped dracut initqueue hook.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Apply Kernel Variables.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Create Volatile Files and Directories.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Coldplug All udev Devices.
Feb  1 09:16:59 np0005604375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped dracut pre-trigger hook.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Setup Virtual Console.
Feb  1 09:16:59 np0005604375 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Closed udev Control Socket.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Closed udev Kernel Socket.
Feb  1 09:16:59 np0005604375 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped dracut pre-udev hook.
Feb  1 09:16:59 np0005604375 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped dracut cmdline hook.
Feb  1 09:16:59 np0005604375 systemd[1]: Starting Cleanup udev Database...
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb  1 09:16:59 np0005604375 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Create List of Static Device Nodes.
Feb  1 09:16:59 np0005604375 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Stopped Create System Users.
Feb  1 09:16:59 np0005604375 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  1 09:16:59 np0005604375 systemd[1]: Finished Cleanup udev Database.
Feb  1 09:16:59 np0005604375 systemd[1]: Reached target Switch Root.
Feb  1 09:16:59 np0005604375 systemd[1]: Starting Switch Root...
Feb  1 09:16:59 np0005604375 systemd[1]: Switching root.
Feb  1 09:16:59 np0005604375 systemd-journald[305]: Journal stopped
Feb  1 09:17:00 np0005604375 systemd-journald: Received SIGTERM from PID 1 (systemd).
Feb  1 09:17:00 np0005604375 kernel: audit: type=1404 audit(1769955419.901:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb  1 09:17:00 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 09:17:00 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 09:17:00 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 09:17:00 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 09:17:00 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 09:17:00 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 09:17:00 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 09:17:00 np0005604375 kernel: audit: type=1403 audit(1769955419.999:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb  1 09:17:00 np0005604375 systemd: Successfully loaded SELinux policy in 100.948ms.
Feb  1 09:17:00 np0005604375 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.725ms.
Feb  1 09:17:00 np0005604375 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  1 09:17:00 np0005604375 systemd: Detected virtualization kvm.
Feb  1 09:17:00 np0005604375 systemd: Detected architecture x86-64.
Feb  1 09:17:00 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:17:00 np0005604375 systemd: initrd-switch-root.service: Deactivated successfully.
Feb  1 09:17:00 np0005604375 systemd: Stopped Switch Root.
Feb  1 09:17:00 np0005604375 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb  1 09:17:00 np0005604375 systemd: Created slice Slice /system/getty.
Feb  1 09:17:00 np0005604375 systemd: Created slice Slice /system/serial-getty.
Feb  1 09:17:00 np0005604375 systemd: Created slice Slice /system/sshd-keygen.
Feb  1 09:17:00 np0005604375 systemd: Created slice User and Session Slice.
Feb  1 09:17:00 np0005604375 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  1 09:17:00 np0005604375 systemd: Started Forward Password Requests to Wall Directory Watch.
Feb  1 09:17:00 np0005604375 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb  1 09:17:00 np0005604375 systemd: Reached target Local Encrypted Volumes.
Feb  1 09:17:00 np0005604375 systemd: Stopped target Switch Root.
Feb  1 09:17:00 np0005604375 systemd: Stopped target Initrd File Systems.
Feb  1 09:17:00 np0005604375 systemd: Stopped target Initrd Root File System.
Feb  1 09:17:00 np0005604375 systemd: Reached target Local Integrity Protected Volumes.
Feb  1 09:17:00 np0005604375 systemd: Reached target Path Units.
Feb  1 09:17:00 np0005604375 systemd: Reached target rpc_pipefs.target.
Feb  1 09:17:00 np0005604375 systemd: Reached target Slice Units.
Feb  1 09:17:00 np0005604375 systemd: Reached target Swaps.
Feb  1 09:17:00 np0005604375 systemd: Reached target Local Verity Protected Volumes.
Feb  1 09:17:00 np0005604375 systemd: Listening on RPCbind Server Activation Socket.
Feb  1 09:17:00 np0005604375 systemd: Reached target RPC Port Mapper.
Feb  1 09:17:00 np0005604375 systemd: Listening on Process Core Dump Socket.
Feb  1 09:17:00 np0005604375 systemd: Listening on initctl Compatibility Named Pipe.
Feb  1 09:17:00 np0005604375 systemd: Listening on udev Control Socket.
Feb  1 09:17:00 np0005604375 systemd: Listening on udev Kernel Socket.
Feb  1 09:17:00 np0005604375 systemd: Mounting Huge Pages File System...
Feb  1 09:17:00 np0005604375 systemd: Mounting POSIX Message Queue File System...
Feb  1 09:17:00 np0005604375 systemd: Mounting Kernel Debug File System...
Feb  1 09:17:00 np0005604375 systemd: Mounting Kernel Trace File System...
Feb  1 09:17:00 np0005604375 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  1 09:17:00 np0005604375 systemd: Starting Create List of Static Device Nodes...
Feb  1 09:17:00 np0005604375 systemd: Starting Load Kernel Module configfs...
Feb  1 09:17:00 np0005604375 systemd: Starting Load Kernel Module drm...
Feb  1 09:17:00 np0005604375 systemd: Starting Load Kernel Module efi_pstore...
Feb  1 09:17:00 np0005604375 systemd: Starting Load Kernel Module fuse...
Feb  1 09:17:00 np0005604375 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb  1 09:17:00 np0005604375 systemd: systemd-fsck-root.service: Deactivated successfully.
Feb  1 09:17:00 np0005604375 systemd: Stopped File System Check on Root Device.
Feb  1 09:17:00 np0005604375 systemd: Stopped Journal Service.
Feb  1 09:17:00 np0005604375 systemd: Starting Journal Service...
Feb  1 09:17:00 np0005604375 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  1 09:17:00 np0005604375 systemd: Starting Generate network units from Kernel command line...
Feb  1 09:17:00 np0005604375 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  1 09:17:00 np0005604375 kernel: fuse: init (API version 7.37)
Feb  1 09:17:00 np0005604375 systemd: Starting Remount Root and Kernel File Systems...
Feb  1 09:17:00 np0005604375 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb  1 09:17:00 np0005604375 systemd: Starting Apply Kernel Variables...
Feb  1 09:17:00 np0005604375 systemd: Starting Coldplug All udev Devices...
Feb  1 09:17:00 np0005604375 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb  1 09:17:00 np0005604375 systemd: Mounted Huge Pages File System.
Feb  1 09:17:00 np0005604375 systemd-journald[678]: Journal started
Feb  1 09:17:00 np0005604375 systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  1 09:17:00 np0005604375 systemd: Mounted POSIX Message Queue File System.
Feb  1 09:17:00 np0005604375 systemd[1]: Queued start job for default target Multi-User System.
Feb  1 09:17:00 np0005604375 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb  1 09:17:00 np0005604375 systemd: Started Journal Service.
Feb  1 09:17:00 np0005604375 systemd[1]: Mounted Kernel Debug File System.
Feb  1 09:17:00 np0005604375 systemd[1]: Mounted Kernel Trace File System.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Create List of Static Device Nodes.
Feb  1 09:17:00 np0005604375 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Load Kernel Module configfs.
Feb  1 09:17:00 np0005604375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Load Kernel Module efi_pstore.
Feb  1 09:17:00 np0005604375 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Load Kernel Module fuse.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Generate network units from Kernel command line.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Remount Root and Kernel File Systems.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Apply Kernel Variables.
Feb  1 09:17:00 np0005604375 systemd[1]: Mounting FUSE Control File System...
Feb  1 09:17:00 np0005604375 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Rebuild Hardware Database...
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Flush Journal to Persistent Storage...
Feb  1 09:17:00 np0005604375 kernel: ACPI: bus type drm_connector registered
Feb  1 09:17:00 np0005604375 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Load/Save OS Random Seed...
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Create System Users...
Feb  1 09:17:00 np0005604375 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Load Kernel Module drm.
Feb  1 09:17:00 np0005604375 systemd[1]: Mounted FUSE Control File System.
Feb  1 09:17:00 np0005604375 systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  1 09:17:00 np0005604375 systemd-journald[678]: Received client request to flush runtime journal.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Flush Journal to Persistent Storage.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Load/Save OS Random Seed.
Feb  1 09:17:00 np0005604375 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Create System Users.
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Coldplug All udev Devices.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  1 09:17:00 np0005604375 systemd[1]: Reached target Preparation for Local File Systems.
Feb  1 09:17:00 np0005604375 systemd[1]: Reached target Local File Systems.
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb  1 09:17:00 np0005604375 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb  1 09:17:00 np0005604375 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb  1 09:17:00 np0005604375 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Automatic Boot Loader Update...
Feb  1 09:17:00 np0005604375 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Create Volatile Files and Directories...
Feb  1 09:17:00 np0005604375 bootctl[695]: Couldn't find EFI system partition, skipping.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Automatic Boot Loader Update.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Create Volatile Files and Directories.
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Security Auditing Service...
Feb  1 09:17:00 np0005604375 systemd[1]: Starting RPC Bind...
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Rebuild Journal Catalog...
Feb  1 09:17:00 np0005604375 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb  1 09:17:00 np0005604375 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Rebuild Journal Catalog.
Feb  1 09:17:00 np0005604375 systemd[1]: Started RPC Bind.
Feb  1 09:17:00 np0005604375 augenrules[706]: /sbin/augenrules: No change
Feb  1 09:17:00 np0005604375 augenrules[721]: No rules
Feb  1 09:17:00 np0005604375 augenrules[721]: enabled 1
Feb  1 09:17:00 np0005604375 augenrules[721]: failure 1
Feb  1 09:17:00 np0005604375 augenrules[721]: pid 701
Feb  1 09:17:00 np0005604375 augenrules[721]: rate_limit 0
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_limit 8192
Feb  1 09:17:00 np0005604375 augenrules[721]: lost 0
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog 3
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_wait_time 60000
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_wait_time_actual 0
Feb  1 09:17:00 np0005604375 augenrules[721]: enabled 1
Feb  1 09:17:00 np0005604375 augenrules[721]: failure 1
Feb  1 09:17:00 np0005604375 augenrules[721]: pid 701
Feb  1 09:17:00 np0005604375 augenrules[721]: rate_limit 0
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_limit 8192
Feb  1 09:17:00 np0005604375 augenrules[721]: lost 0
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog 4
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_wait_time 60000
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_wait_time_actual 0
Feb  1 09:17:00 np0005604375 augenrules[721]: enabled 1
Feb  1 09:17:00 np0005604375 augenrules[721]: failure 1
Feb  1 09:17:00 np0005604375 augenrules[721]: pid 701
Feb  1 09:17:00 np0005604375 augenrules[721]: rate_limit 0
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_limit 8192
Feb  1 09:17:00 np0005604375 augenrules[721]: lost 0
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog 4
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_wait_time 60000
Feb  1 09:17:00 np0005604375 augenrules[721]: backlog_wait_time_actual 0
Feb  1 09:17:00 np0005604375 systemd[1]: Started Security Auditing Service.
Feb  1 09:17:00 np0005604375 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb  1 09:17:00 np0005604375 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb  1 09:17:01 np0005604375 systemd[1]: Finished Rebuild Hardware Database.
Feb  1 09:17:01 np0005604375 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  1 09:17:01 np0005604375 systemd[1]: Starting Update is Completed...
Feb  1 09:17:01 np0005604375 systemd[1]: Finished Update is Completed.
Feb  1 09:17:01 np0005604375 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Feb  1 09:17:01 np0005604375 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  1 09:17:01 np0005604375 systemd[1]: Reached target System Initialization.
Feb  1 09:17:01 np0005604375 systemd[1]: Started dnf makecache --timer.
Feb  1 09:17:01 np0005604375 systemd[1]: Started Daily rotation of log files.
Feb  1 09:17:01 np0005604375 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb  1 09:17:01 np0005604375 systemd[1]: Reached target Timer Units.
Feb  1 09:17:01 np0005604375 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb  1 09:17:01 np0005604375 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb  1 09:17:01 np0005604375 systemd[1]: Reached target Socket Units.
Feb  1 09:17:01 np0005604375 systemd[1]: Starting D-Bus System Message Bus...
Feb  1 09:17:01 np0005604375 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  1 09:17:01 np0005604375 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb  1 09:17:01 np0005604375 systemd[1]: Starting Load Kernel Module configfs...
Feb  1 09:17:01 np0005604375 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  1 09:17:01 np0005604375 systemd[1]: Finished Load Kernel Module configfs.
Feb  1 09:17:01 np0005604375 systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Feb  1 09:17:01 np0005604375 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb  1 09:17:01 np0005604375 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb  1 09:17:01 np0005604375 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb  1 09:17:01 np0005604375 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb  1 09:17:01 np0005604375 systemd[1]: Started D-Bus System Message Bus.
Feb  1 09:17:01 np0005604375 systemd[1]: Reached target Basic System.
Feb  1 09:17:01 np0005604375 dbus-broker-lau[765]: Ready
Feb  1 09:17:01 np0005604375 systemd[1]: Starting NTP client/server...
Feb  1 09:17:01 np0005604375 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb  1 09:17:01 np0005604375 systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb  1 09:17:01 np0005604375 systemd[1]: Starting IPv4 firewall with iptables...
Feb  1 09:17:01 np0005604375 systemd[1]: Started irqbalance daemon.
Feb  1 09:17:01 np0005604375 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb  1 09:17:01 np0005604375 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  1 09:17:01 np0005604375 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  1 09:17:01 np0005604375 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  1 09:17:01 np0005604375 systemd[1]: Reached target sshd-keygen.target.
Feb  1 09:17:01 np0005604375 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb  1 09:17:01 np0005604375 systemd[1]: Reached target User and Group Name Lookups.
Feb  1 09:17:01 np0005604375 systemd[1]: Starting User Login Management...
Feb  1 09:17:01 np0005604375 systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb  1 09:17:01 np0005604375 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  1 09:17:01 np0005604375 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  1 09:17:01 np0005604375 systemd-logind[786]: New seat seat0.
Feb  1 09:17:01 np0005604375 systemd[1]: Started User Login Management.
Feb  1 09:17:01 np0005604375 chronyd[800]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  1 09:17:01 np0005604375 chronyd[800]: Loaded 0 symmetric keys
Feb  1 09:17:01 np0005604375 chronyd[800]: Using right/UTC timezone to obtain leap second data
Feb  1 09:17:01 np0005604375 chronyd[800]: Loaded seccomp filter (level 2)
Feb  1 09:17:01 np0005604375 systemd[1]: Started NTP client/server.
Feb  1 09:17:01 np0005604375 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb  1 09:17:01 np0005604375 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb  1 09:17:01 np0005604375 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb  1 09:17:01 np0005604375 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb  1 09:17:01 np0005604375 kernel: Console: switching to colour dummy device 80x25
Feb  1 09:17:01 np0005604375 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb  1 09:17:01 np0005604375 kernel: [drm] features: -context_init
Feb  1 09:17:01 np0005604375 kernel: [drm] number of scanouts: 1
Feb  1 09:17:01 np0005604375 kernel: [drm] number of cap sets: 0
Feb  1 09:17:01 np0005604375 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb  1 09:17:01 np0005604375 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb  1 09:17:01 np0005604375 kernel: Console: switching to colour frame buffer device 128x48
Feb  1 09:17:01 np0005604375 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb  1 09:17:01 np0005604375 kernel: kvm_amd: TSC scaling supported
Feb  1 09:17:01 np0005604375 kernel: kvm_amd: Nested Virtualization enabled
Feb  1 09:17:01 np0005604375 kernel: kvm_amd: Nested Paging enabled
Feb  1 09:17:01 np0005604375 kernel: kvm_amd: LBR virtualization supported
Feb  1 09:17:01 np0005604375 iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Feb  1 09:17:01 np0005604375 systemd[1]: Finished IPv4 firewall with iptables.
Feb  1 09:17:01 np0005604375 cloud-init[837]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sun, 01 Feb 2026 14:17:01 +0000. Up 5.08 seconds.
Feb  1 09:17:01 np0005604375 systemd[1]: run-cloud\x2dinit-tmp-tmph5o6a1oq.mount: Deactivated successfully.
Feb  1 09:17:01 np0005604375 systemd[1]: Starting Hostname Service...
Feb  1 09:17:02 np0005604375 systemd[1]: Started Hostname Service.
Feb  1 09:17:02 np0005604375 systemd-hostnamed[851]: Hostname set to <np0005604375.novalocal> (static)
Feb  1 09:17:02 np0005604375 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb  1 09:17:02 np0005604375 systemd[1]: Reached target Preparation for Network.
Feb  1 09:17:02 np0005604375 systemd[1]: Starting Network Manager...
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.1953] NetworkManager (version 1.54.3-2.el9) is starting... (boot:bc6eed0e-afac-49e7-b313-e00c329dc99a)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.1957] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2120] manager[0x563dd8797000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2163] hostname: hostname: using hostnamed
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2164] hostname: static hostname changed from (none) to "np0005604375.novalocal"
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2167] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2261] manager[0x563dd8797000]: rfkill: Wi-Fi hardware radio set enabled
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2262] manager[0x563dd8797000]: rfkill: WWAN hardware radio set enabled
Feb  1 09:17:02 np0005604375 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2335] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2336] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2336] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2337] manager: Networking is enabled by state file
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2338] settings: Loaded settings plugin: keyfile (internal)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2360] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2381] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2394] dhcp: init: Using DHCP client 'internal'
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2398] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2407] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2417] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2425] device (lo): Activation: starting connection 'lo' (993b83ea-ade5-4a5e-93d7-372f4fe03bae)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2431] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2434] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2453] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2456] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2458] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2459] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2461] device (eth0): carrier: link connected
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2463] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2468] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2473] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2476] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2476] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2478] manager: NetworkManager state is now CONNECTING
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2479] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:17:02 np0005604375 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2485] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2487] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:17:02 np0005604375 systemd[1]: Started Network Manager.
Feb  1 09:17:02 np0005604375 systemd[1]: Reached target Network.
Feb  1 09:17:02 np0005604375 systemd[1]: Starting Network Manager Wait Online...
Feb  1 09:17:02 np0005604375 systemd[1]: Starting GSSAPI Proxy Daemon...
Feb  1 09:17:02 np0005604375 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2649] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2652] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.2660] device (lo): Activation: successful, device activated.
Feb  1 09:17:02 np0005604375 systemd[1]: Started GSSAPI Proxy Daemon.
Feb  1 09:17:02 np0005604375 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  1 09:17:02 np0005604375 systemd[1]: Reached target NFS client services.
Feb  1 09:17:02 np0005604375 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  1 09:17:02 np0005604375 systemd[1]: Reached target Remote File Systems.
Feb  1 09:17:02 np0005604375 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6107] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6115] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6129] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6146] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6147] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6150] manager: NetworkManager state is now CONNECTED_SITE
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6152] device (eth0): Activation: successful, device activated.
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6157] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  1 09:17:02 np0005604375 NetworkManager[855]: <info>  [1769955422.6160] manager: startup complete
Feb  1 09:17:02 np0005604375 systemd[1]: Finished Network Manager Wait Online.
Feb  1 09:17:02 np0005604375 systemd[1]: Starting Cloud-init: Network Stage...
Feb  1 09:17:02 np0005604375 cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Sun, 01 Feb 2026 14:17:02 +0000. Up 6.23 seconds.
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.238        | 255.255.255.0 | global | fa:16:3e:72:09:b3 |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe72:9b3/64 |       .       |  link  | fa:16:3e:72:09:b3 |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb  1 09:17:02 np0005604375 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  1 09:17:03 np0005604375 cloud-init[918]: Generating public/private rsa key pair.
Feb  1 09:17:03 np0005604375 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb  1 09:17:03 np0005604375 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb  1 09:17:03 np0005604375 cloud-init[918]: The key fingerprint is:
Feb  1 09:17:03 np0005604375 cloud-init[918]: SHA256:A+msEEuiyQBEA3Ixk/vsCiONr46beOVvjEwYv2+OMWQ root@np0005604375.novalocal
Feb  1 09:17:03 np0005604375 cloud-init[918]: The key's randomart image is:
Feb  1 09:17:03 np0005604375 cloud-init[918]: +---[RSA 3072]----+
Feb  1 09:17:03 np0005604375 cloud-init[918]: |B+=o             |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |o.oo   .         |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |o o.  o          |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |++oo o .         |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |o.o*E o S        |
Feb  1 09:17:03 np0005604375 cloud-init[918]: | o.+*.   .       |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |= .*++           |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |+=. *++          |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |B=o..*+          |
Feb  1 09:17:03 np0005604375 cloud-init[918]: +----[SHA256]-----+
Feb  1 09:17:03 np0005604375 cloud-init[918]: Generating public/private ecdsa key pair.
Feb  1 09:17:03 np0005604375 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb  1 09:17:03 np0005604375 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb  1 09:17:03 np0005604375 cloud-init[918]: The key fingerprint is:
Feb  1 09:17:03 np0005604375 cloud-init[918]: SHA256:jnt6q868MesRE3MhlqRPNNvZO6KBM5BPOpIOa3ZjOWQ root@np0005604375.novalocal
Feb  1 09:17:03 np0005604375 cloud-init[918]: The key's randomart image is:
Feb  1 09:17:03 np0005604375 cloud-init[918]: +---[ECDSA 256]---+
Feb  1 09:17:03 np0005604375 cloud-init[918]: |     .*..        |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |   . +.= +       |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |  o o = + .      |
Feb  1 09:17:03 np0005604375 cloud-init[918]: | . = + +   .     |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |+ oE= = S o      |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |ooo..o B . .     |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |.+ *  * .        |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |o o oo *o        |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |     o@*..       |
Feb  1 09:17:03 np0005604375 cloud-init[918]: +----[SHA256]-----+
Feb  1 09:17:03 np0005604375 cloud-init[918]: Generating public/private ed25519 key pair.
Feb  1 09:17:03 np0005604375 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb  1 09:17:03 np0005604375 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb  1 09:17:03 np0005604375 cloud-init[918]: The key fingerprint is:
Feb  1 09:17:03 np0005604375 cloud-init[918]: SHA256:nt6iE9ODYt14sYfLWBAJM6sCuxdr++azWEuqgO2OGow root@np0005604375.novalocal
Feb  1 09:17:03 np0005604375 cloud-init[918]: The key's randomart image is:
Feb  1 09:17:03 np0005604375 cloud-init[918]: +--[ED25519 256]--+
Feb  1 09:17:03 np0005604375 cloud-init[918]: |    +. .         |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |     +o          |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |.   .  .         |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |.. .  . .        |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |o o  . *S+       |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |++ oo *.O..      |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |E.=.o. Bo+       |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |.* *o.o.+.       |
Feb  1 09:17:03 np0005604375 cloud-init[918]: |=o*+=o.o...      |
Feb  1 09:17:03 np0005604375 cloud-init[918]: +----[SHA256]-----+
Feb  1 09:17:04 np0005604375 sm-notify[1000]: Version 2.5.4 starting
Feb  1 09:17:04 np0005604375 systemd[1]: Finished Cloud-init: Network Stage.
Feb  1 09:17:04 np0005604375 systemd[1]: Reached target Cloud-config availability.
Feb  1 09:17:04 np0005604375 systemd[1]: Reached target Network is Online.
Feb  1 09:17:04 np0005604375 systemd[1]: Starting Cloud-init: Config Stage...
Feb  1 09:17:04 np0005604375 systemd[1]: Starting Crash recovery kernel arming...
Feb  1 09:17:04 np0005604375 systemd[1]: Starting Notify NFS peers of a restart...
Feb  1 09:17:04 np0005604375 systemd[1]: Starting System Logging Service...
Feb  1 09:17:04 np0005604375 systemd[1]: Starting OpenSSH server daemon...
Feb  1 09:17:04 np0005604375 systemd[1]: Starting Permit User Sessions...
Feb  1 09:17:04 np0005604375 systemd[1]: Started Notify NFS peers of a restart.
Feb  1 09:17:04 np0005604375 systemd[1]: Finished Permit User Sessions.
Feb  1 09:17:04 np0005604375 systemd[1]: Started Command Scheduler.
Feb  1 09:17:04 np0005604375 systemd[1]: Started Getty on tty1.
Feb  1 09:17:04 np0005604375 systemd[1]: Started Serial Getty on ttyS0.
Feb  1 09:17:04 np0005604375 systemd[1]: Reached target Login Prompts.
Feb  1 09:17:04 np0005604375 systemd[1]: Started OpenSSH server daemon.
Feb  1 09:17:04 np0005604375 rsyslogd[1001]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1001" x-info="https://www.rsyslog.com"] start
Feb  1 09:17:04 np0005604375 rsyslogd[1001]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb  1 09:17:04 np0005604375 systemd[1]: Started System Logging Service.
Feb  1 09:17:04 np0005604375 systemd[1]: Reached target Multi-User System.
Feb  1 09:17:04 np0005604375 systemd[1]: Starting Record Runlevel Change in UTMP...
Feb  1 09:17:04 np0005604375 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  1 09:17:04 np0005604375 systemd[1]: Finished Record Runlevel Change in UTMP.
Feb  1 09:17:04 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 09:17:04 np0005604375 kdumpctl[1010]: kdump: No kdump initial ramdisk found.
Feb  1 09:17:04 np0005604375 kdumpctl[1010]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Feb  1 09:17:04 np0005604375 cloud-init[1151]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sun, 01 Feb 2026 14:17:04 +0000. Up 7.69 seconds.
Feb  1 09:17:04 np0005604375 systemd[1]: Finished Cloud-init: Config Stage.
Feb  1 09:17:04 np0005604375 systemd[1]: Starting Cloud-init: Final Stage...
Feb  1 09:17:04 np0005604375 dracut[1261]: dracut-057-102.git20250818.el9
Feb  1 09:17:04 np0005604375 dracut[1263]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Feb  1 09:17:04 np0005604375 cloud-init[1311]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sun, 01 Feb 2026 14:17:04 +0000. Up 8.02 seconds.
Feb  1 09:17:04 np0005604375 cloud-init[1333]: #############################################################
Feb  1 09:17:04 np0005604375 cloud-init[1334]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb  1 09:17:04 np0005604375 cloud-init[1336]: 256 SHA256:jnt6q868MesRE3MhlqRPNNvZO6KBM5BPOpIOa3ZjOWQ root@np0005604375.novalocal (ECDSA)
Feb  1 09:17:04 np0005604375 cloud-init[1338]: 256 SHA256:nt6iE9ODYt14sYfLWBAJM6sCuxdr++azWEuqgO2OGow root@np0005604375.novalocal (ED25519)
Feb  1 09:17:04 np0005604375 cloud-init[1340]: 3072 SHA256:A+msEEuiyQBEA3Ixk/vsCiONr46beOVvjEwYv2+OMWQ root@np0005604375.novalocal (RSA)
Feb  1 09:17:04 np0005604375 cloud-init[1341]: -----END SSH HOST KEY FINGERPRINTS-----
Feb  1 09:17:04 np0005604375 cloud-init[1342]: #############################################################
Feb  1 09:17:04 np0005604375 cloud-init[1311]: Cloud-init v. 24.4-8.el9 finished at Sun, 01 Feb 2026 14:17:04 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 8.19 seconds
Feb  1 09:17:04 np0005604375 systemd[1]: Finished Cloud-init: Final Stage.
Feb  1 09:17:04 np0005604375 systemd[1]: Reached target Cloud-init target.
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: memstrack is not available
Feb  1 09:17:05 np0005604375 dracut[1263]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  1 09:17:05 np0005604375 dracut[1263]: memstrack is not available
Feb  1 09:17:05 np0005604375 dracut[1263]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  1 09:17:05 np0005604375 dracut[1263]: *** Including module: systemd ***
Feb  1 09:17:06 np0005604375 dracut[1263]: *** Including module: fips ***
Feb  1 09:17:06 np0005604375 dracut[1263]: *** Including module: systemd-initrd ***
Feb  1 09:17:06 np0005604375 dracut[1263]: *** Including module: i18n ***
Feb  1 09:17:06 np0005604375 dracut[1263]: *** Including module: drm ***
Feb  1 09:17:06 np0005604375 dracut[1263]: *** Including module: prefixdevname ***
Feb  1 09:17:06 np0005604375 dracut[1263]: *** Including module: kernel-modules ***
Feb  1 09:17:06 np0005604375 kernel: block vda: the capability attribute has been deprecated.
Feb  1 09:17:07 np0005604375 dracut[1263]: *** Including module: kernel-modules-extra ***
Feb  1 09:17:07 np0005604375 dracut[1263]: *** Including module: qemu ***
Feb  1 09:17:07 np0005604375 dracut[1263]: *** Including module: fstab-sys ***
Feb  1 09:17:07 np0005604375 dracut[1263]: *** Including module: rootfs-block ***
Feb  1 09:17:07 np0005604375 dracut[1263]: *** Including module: terminfo ***
Feb  1 09:17:07 np0005604375 dracut[1263]: *** Including module: udev-rules ***
Feb  1 09:17:07 np0005604375 chronyd[800]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Feb  1 09:17:08 np0005604375 chronyd[800]: System clock wrong by 1.223169 seconds
Feb  1 09:17:08 np0005604375 chronyd[800]: System clock was stepped by 1.223169 seconds
Feb  1 09:17:08 np0005604375 chronyd[800]: System clock TAI offset set to 37 seconds
Feb  1 09:17:09 np0005604375 dracut[1263]: Skipping udev rule: 91-permissions.rules
Feb  1 09:17:09 np0005604375 dracut[1263]: Skipping udev rule: 80-drivers-modprobe.rules
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: virtiofs ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: dracut-systemd ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: usrmount ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: base ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: fs-lib ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: kdumpbase ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: microcode_ctl-fw_dir_override ***
Feb  1 09:17:09 np0005604375 dracut[1263]:  microcode_ctl module: mangling fw_dir
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb  1 09:17:09 np0005604375 dracut[1263]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: openssl ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: shutdown ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including module: squash ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Including modules done ***
Feb  1 09:17:09 np0005604375 dracut[1263]: *** Installing kernel module dependencies ***
Feb  1 09:17:10 np0005604375 dracut[1263]: *** Installing kernel module dependencies done ***
Feb  1 09:17:10 np0005604375 dracut[1263]: *** Resolving executable dependencies ***
Feb  1 09:17:11 np0005604375 dracut[1263]: *** Resolving executable dependencies done ***
Feb  1 09:17:11 np0005604375 dracut[1263]: *** Generating early-microcode cpio image ***
Feb  1 09:17:11 np0005604375 dracut[1263]: *** Store current command line parameters ***
Feb  1 09:17:11 np0005604375 dracut[1263]: Stored kernel commandline:
Feb  1 09:17:11 np0005604375 dracut[1263]: No dracut internal kernel commandline stored in the initramfs
Feb  1 09:17:11 np0005604375 dracut[1263]: *** Install squash loader ***
Feb  1 09:17:12 np0005604375 dracut[1263]: *** Squashing the files inside the initramfs ***
Feb  1 09:17:12 np0005604375 irqbalance[781]: Cannot change IRQ 25 affinity: Operation not permitted
Feb  1 09:17:12 np0005604375 irqbalance[781]: IRQ 25 affinity is now unmanaged
Feb  1 09:17:12 np0005604375 irqbalance[781]: Cannot change IRQ 31 affinity: Operation not permitted
Feb  1 09:17:12 np0005604375 irqbalance[781]: IRQ 31 affinity is now unmanaged
Feb  1 09:17:12 np0005604375 irqbalance[781]: Cannot change IRQ 28 affinity: Operation not permitted
Feb  1 09:17:12 np0005604375 irqbalance[781]: IRQ 28 affinity is now unmanaged
Feb  1 09:17:12 np0005604375 irqbalance[781]: Cannot change IRQ 32 affinity: Operation not permitted
Feb  1 09:17:12 np0005604375 irqbalance[781]: IRQ 32 affinity is now unmanaged
Feb  1 09:17:12 np0005604375 irqbalance[781]: Cannot change IRQ 30 affinity: Operation not permitted
Feb  1 09:17:12 np0005604375 irqbalance[781]: IRQ 30 affinity is now unmanaged
Feb  1 09:17:12 np0005604375 irqbalance[781]: Cannot change IRQ 29 affinity: Operation not permitted
Feb  1 09:17:12 np0005604375 irqbalance[781]: IRQ 29 affinity is now unmanaged
Feb  1 09:17:13 np0005604375 dracut[1263]: *** Squashing the files inside the initramfs done ***
Feb  1 09:17:13 np0005604375 dracut[1263]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Feb  1 09:17:13 np0005604375 dracut[1263]: *** Hardlinking files ***
Feb  1 09:17:13 np0005604375 dracut[1263]: *** Hardlinking files done ***
Feb  1 09:17:13 np0005604375 dracut[1263]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Feb  1 09:17:13 np0005604375 kdumpctl[1010]: kdump: kexec: loaded kdump kernel
Feb  1 09:17:13 np0005604375 kdumpctl[1010]: kdump: Starting kdump: [OK]
Feb  1 09:17:13 np0005604375 systemd[1]: Finished Crash recovery kernel arming.
Feb  1 09:17:13 np0005604375 systemd[1]: Startup finished in 1.206s (kernel) + 2.041s (initrd) + 12.548s (userspace) = 15.797s.
Feb  1 09:17:13 np0005604375 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  1 09:17:19 np0005604375 systemd[1]: Created slice User Slice of UID 1000.
Feb  1 09:17:19 np0005604375 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb  1 09:17:19 np0005604375 systemd-logind[786]: New session 1 of user zuul.
Feb  1 09:17:19 np0005604375 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb  1 09:17:19 np0005604375 systemd[1]: Starting User Manager for UID 1000...
Feb  1 09:17:19 np0005604375 systemd[4302]: Queued start job for default target Main User Target.
Feb  1 09:17:19 np0005604375 systemd[4302]: Created slice User Application Slice.
Feb  1 09:17:19 np0005604375 systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  1 09:17:19 np0005604375 systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Feb  1 09:17:19 np0005604375 systemd[4302]: Reached target Paths.
Feb  1 09:17:19 np0005604375 systemd[4302]: Reached target Timers.
Feb  1 09:17:19 np0005604375 systemd[4302]: Starting D-Bus User Message Bus Socket...
Feb  1 09:17:19 np0005604375 systemd[4302]: Starting Create User's Volatile Files and Directories...
Feb  1 09:17:19 np0005604375 systemd[4302]: Finished Create User's Volatile Files and Directories.
Feb  1 09:17:19 np0005604375 systemd[4302]: Listening on D-Bus User Message Bus Socket.
Feb  1 09:17:19 np0005604375 systemd[4302]: Reached target Sockets.
Feb  1 09:17:19 np0005604375 systemd[4302]: Reached target Basic System.
Feb  1 09:17:19 np0005604375 systemd[1]: Started User Manager for UID 1000.
Feb  1 09:17:19 np0005604375 systemd[4302]: Reached target Main User Target.
Feb  1 09:17:19 np0005604375 systemd[4302]: Startup finished in 123ms.
Feb  1 09:17:19 np0005604375 systemd[1]: Started Session 1 of User zuul.
Feb  1 09:17:20 np0005604375 python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:17:23 np0005604375 python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:17:28 np0005604375 python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:17:29 np0005604375 python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb  1 09:17:31 np0005604375 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDOg3D/C5sT0sUANmCP2WkPymn7Ec8kER6Qfmso1GaCssVviPENWHfurW4D/9FZnZxpW6/BcjPRXXGGkqaEWbPYfCwONRlQsSb5sPPGoHZ4koyH23+e2Za22LNnaoq3YtLLTgB7UpJSnChaaRjquVHY5RvjfoxypufjOgc7RGV37rrZwTyu1e1Xjb8BKMzDgUy1GBMRMdGjz43DCGk20+T90IVXCtMaSkJuNAjiERMJBH0jhBo7wmJfpcL5ox8OQwV1yMsGjCVKxlTDeuVV18TEjxT/r6sKv1WbDNByANT6DZAAXl/d3JWo/+WLpl77QewiHt7s106MkLLeAWW8DnODSe5HkBfj5uqA8OowP81OV9abJBFhbtfrkjBvuxfkpNVezDbFW0NkJD1qemdFriQJwP9u4pQycLlhkIFjdc2uwFWWoxHsQmshHn9SXhJ8B5hEGRC+C+BQLXtEBNeMFJOIIzbv/Np1NMkVed/R/CUyryMVRpQqcIJdzuTOumrr6U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:32 np0005604375 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:32 np0005604375 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:17:33 np0005604375 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769955452.5373676-207-276159614298051/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=275cd86534264dd4b986e9685221be1c_id_rsa follow=False checksum=93121fb72603a63f689221ec5db13b84048b12b5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:33 np0005604375 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  1 09:17:33 np0005604375 python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:17:34 np0005604375 python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769955453.49118-240-130840043284461/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=275cd86534264dd4b986e9685221be1c_id_rsa.pub follow=False checksum=9da6b0c6916c9c03e8a5858dd9e4da44fef378ad backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:35 np0005604375 python3[4974]: ansible-ping Invoked with data=pong
Feb  1 09:17:36 np0005604375 python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:17:38 np0005604375 python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb  1 09:17:39 np0005604375 python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:39 np0005604375 python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:39 np0005604375 python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:39 np0005604375 python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:40 np0005604375 python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:40 np0005604375 python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:41 np0005604375 python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:42 np0005604375 python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:17:42 np0005604375 python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769955461.9600563-21-50831400371360/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:43 np0005604375 python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:43 np0005604375 python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:43 np0005604375 python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:44 np0005604375 python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:44 np0005604375 python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:44 np0005604375 python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:44 np0005604375 python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:45 np0005604375 python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:45 np0005604375 python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:47 np0005604375 python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:47 np0005604375 python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:47 np0005604375 python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:48 np0005604375 python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:48 np0005604375 python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:48 np0005604375 python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:48 np0005604375 python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:49 np0005604375 python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:49 np0005604375 python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:49 np0005604375 python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:49 np0005604375 python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:50 np0005604375 python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:50 np0005604375 python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:50 np0005604375 python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:50 np0005604375 python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:51 np0005604375 python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:51 np0005604375 python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:17:53 np0005604375 python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  1 09:17:53 np0005604375 systemd[1]: Starting Time & Date Service...
Feb  1 09:17:53 np0005604375 systemd[1]: Started Time & Date Service.
Feb  1 09:17:53 np0005604375 systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Feb  1 09:17:54 np0005604375 python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:55 np0005604375 python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:17:55 np0005604375 python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769955474.8666103-153-65205997949975/source _original_basename=tmpctrve2he follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:56 np0005604375 python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:17:56 np0005604375 python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769955475.792549-183-151614907174427/source _original_basename=tmp3f3ih5xm follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:57 np0005604375 python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:17:57 np0005604375 python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769955476.9317107-231-197640845934383/source _original_basename=tmpp8f8afke follow=False checksum=315d925a1c7d27b381f3cae1546bdf6d57bfb104 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:58 np0005604375 python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:17:58 np0005604375 python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:17:58 np0005604375 python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:17:59 np0005604375 python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769955478.5363784-273-202277195219368/source _original_basename=tmp7jmm5fn9 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:17:59 np0005604375 python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2942-7cf9-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:18:00 np0005604375 python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2942-7cf9-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb  1 09:18:01 np0005604375 python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:18:19 np0005604375 python3[6943]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:18:23 np0005604375 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb  1 09:18:52 np0005604375 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb  1 09:18:52 np0005604375 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2310] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  1 09:18:52 np0005604375 systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2474] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2499] settings: (eth1): created default wired connection 'Wired connection 1'
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2501] device (eth1): carrier: link connected
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2502] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2506] policy: auto-activating connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5)
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2509] device (eth1): Activation: starting connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5)
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2510] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2511] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2514] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:18:52 np0005604375 NetworkManager[855]: <info>  [1769955532.2516] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:18:53 np0005604375 python3[6973]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-6553-8f61-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:19:03 np0005604375 python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:19:03 np0005604375 python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769955543.0591838-102-54971321909273/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=3be24a5af914606cc74cafdf80f44ef63ee45ba0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:19:04 np0005604375 python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:19:04 np0005604375 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  1 09:19:04 np0005604375 systemd[1]: Stopped Network Manager Wait Online.
Feb  1 09:19:04 np0005604375 systemd[1]: Stopping Network Manager Wait Online...
Feb  1 09:19:04 np0005604375 systemd[1]: Stopping Network Manager...
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5064] caught SIGTERM, shutting down normally.
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5071] dhcp4 (eth0): canceled DHCP transaction
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5071] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5071] dhcp4 (eth0): state changed no lease
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5073] manager: NetworkManager state is now CONNECTING
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5193] dhcp4 (eth1): canceled DHCP transaction
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5193] dhcp4 (eth1): state changed no lease
Feb  1 09:19:04 np0005604375 NetworkManager[855]: <info>  [1769955544.5246] exiting (success)
Feb  1 09:19:04 np0005604375 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  1 09:19:04 np0005604375 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  1 09:19:04 np0005604375 systemd[1]: Stopped Network Manager.
Feb  1 09:19:04 np0005604375 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  1 09:19:04 np0005604375 systemd[1]: Starting Network Manager...
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.5573] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:bc6eed0e-afac-49e7-b313-e00c329dc99a)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.5576] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.5611] manager[0x56372075c000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  1 09:19:04 np0005604375 systemd[1]: Starting Hostname Service...
Feb  1 09:19:04 np0005604375 systemd[1]: Started Hostname Service.
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6340] hostname: hostname: using hostnamed
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6340] hostname: static hostname changed from (none) to "np0005604375.novalocal"
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6348] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6353] manager[0x56372075c000]: rfkill: Wi-Fi hardware radio set enabled
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6353] manager[0x56372075c000]: rfkill: WWAN hardware radio set enabled
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6396] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6397] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6398] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6399] manager: Networking is enabled by state file
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6403] settings: Loaded settings plugin: keyfile (internal)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6413] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6458] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6474] dhcp: init: Using DHCP client 'internal'
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6479] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6486] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6493] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6509] device (lo): Activation: starting connection 'lo' (993b83ea-ade5-4a5e-93d7-372f4fe03bae)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6519] device (eth0): carrier: link connected
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6526] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6534] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6535] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6544] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6554] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6562] device (eth1): carrier: link connected
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6568] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6577] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5) (indicated)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6577] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6585] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6596] device (eth1): Activation: starting connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5)
Feb  1 09:19:04 np0005604375 systemd[1]: Started Network Manager.
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6604] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6611] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6627] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6630] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6633] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6636] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6639] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6641] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6645] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6654] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6657] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6671] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6674] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6688] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6695] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6700] device (lo): Activation: successful, device activated.
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6708] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6714] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  1 09:19:04 np0005604375 systemd[1]: Starting Network Manager Wait Online...
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6792] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6815] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6817] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6822] manager: NetworkManager state is now CONNECTED_SITE
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6828] device (eth0): Activation: successful, device activated.
Feb  1 09:19:04 np0005604375 NetworkManager[7185]: <info>  [1769955544.6837] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  1 09:19:05 np0005604375 python3[7260]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-6553-8f61-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:19:14 np0005604375 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  1 09:19:34 np0005604375 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.8673] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  1 09:19:49 np0005604375 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  1 09:19:49 np0005604375 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9018] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9021] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9027] device (eth1): Activation: successful, device activated.
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9035] manager: startup complete
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9037] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <warn>  [1769955589.9042] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9050] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb  1 09:19:49 np0005604375 systemd[1]: Finished Network Manager Wait Online.
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9213] dhcp4 (eth1): canceled DHCP transaction
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9215] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9215] dhcp4 (eth1): state changed no lease
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9225] policy: auto-activating connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9229] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9230] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9232] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9236] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9242] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9271] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9273] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:19:49 np0005604375 NetworkManager[7185]: <info>  [1769955589.9278] device (eth1): Activation: successful, device activated.
Feb  1 09:19:51 np0005604375 systemd[4302]: Starting Mark boot as successful...
Feb  1 09:19:51 np0005604375 systemd[4302]: Finished Mark boot as successful.
Feb  1 09:19:59 np0005604375 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  1 09:20:03 np0005604375 python3[7366]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:20:03 np0005604375 python3[7439]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769955603.0981352-267-107694781614546/source _original_basename=tmpbgzi0j16 follow=False checksum=7eace079e547e1278ba77819803b9809997a2a46 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:21:03 np0005604375 systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Feb  1 09:22:51 np0005604375 systemd[4302]: Created slice User Background Tasks Slice.
Feb  1 09:22:51 np0005604375 systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Feb  1 09:22:51 np0005604375 systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Feb  1 09:26:44 np0005604375 systemd-logind[786]: New session 3 of user zuul.
Feb  1 09:26:44 np0005604375 systemd[1]: Started Session 3 of User zuul.
Feb  1 09:26:44 np0005604375 python3[7498]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-c26f-db18-000000002167-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:26:45 np0005604375 python3[7526]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:26:45 np0005604375 python3[7552]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:26:45 np0005604375 python3[7579]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:26:45 np0005604375 python3[7605]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:26:46 np0005604375 python3[7631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:26:47 np0005604375 python3[7709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:26:47 np0005604375 python3[7782]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769956006.811156-494-278074563692936/source _original_basename=tmpdhrnq3pq follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:26:48 np0005604375 python3[7832]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 09:26:48 np0005604375 systemd[1]: Reloading.
Feb  1 09:26:48 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:26:50 np0005604375 python3[7888]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb  1 09:26:50 np0005604375 python3[7914]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:26:50 np0005604375 python3[7942]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:26:50 np0005604375 python3[7970]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:26:51 np0005604375 python3[7998]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:26:51 np0005604375 python3[8025]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-c26f-db18-00000000216e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:26:52 np0005604375 python3[8055]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:26:54 np0005604375 systemd[1]: session-3.scope: Deactivated successfully.
Feb  1 09:26:54 np0005604375 systemd[1]: session-3.scope: Consumed 3.477s CPU time.
Feb  1 09:26:54 np0005604375 systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Feb  1 09:26:54 np0005604375 systemd-logind[786]: Removed session 3.
Feb  1 09:26:55 np0005604375 systemd-logind[786]: New session 4 of user zuul.
Feb  1 09:26:55 np0005604375 systemd[1]: Started Session 4 of User zuul.
Feb  1 09:26:55 np0005604375 python3[8088]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  1 09:27:01 np0005604375 setsebool[8127]: The virt_use_nfs policy boolean was changed to 1 by root
Feb  1 09:27:01 np0005604375 setsebool[8127]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb  1 09:27:11 np0005604375 kernel: SELinux:  Converting 385 SID table entries...
Feb  1 09:27:11 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 09:27:11 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 09:27:11 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 09:27:11 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 09:27:11 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 09:27:11 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 09:27:11 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 09:27:20 np0005604375 kernel: SELinux:  Converting 388 SID table entries...
Feb  1 09:27:20 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 09:27:20 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 09:27:20 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 09:27:20 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 09:27:20 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 09:27:20 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 09:27:20 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 09:27:37 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  1 09:27:37 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 09:27:37 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 09:27:37 np0005604375 systemd[1]: Reloading.
Feb  1 09:27:37 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:27:37 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 09:27:48 np0005604375 python3[17083]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8732-6ace-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:27:49 np0005604375 kernel: evm: overlay not supported
Feb  1 09:27:49 np0005604375 systemd[4302]: Starting D-Bus User Message Bus...
Feb  1 09:27:49 np0005604375 dbus-broker-launch[17649]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb  1 09:27:49 np0005604375 dbus-broker-launch[17649]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb  1 09:27:49 np0005604375 systemd[4302]: Started D-Bus User Message Bus.
Feb  1 09:27:49 np0005604375 dbus-broker-lau[17649]: Ready
Feb  1 09:27:49 np0005604375 systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  1 09:27:49 np0005604375 systemd[4302]: Created slice Slice /user.
Feb  1 09:27:49 np0005604375 systemd[4302]: podman-17581.scope: unit configures an IP firewall, but not running as root.
Feb  1 09:27:49 np0005604375 systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Feb  1 09:27:49 np0005604375 systemd[4302]: Started podman-17581.scope.
Feb  1 09:27:49 np0005604375 systemd[4302]: Started podman-pause-3e251e21.scope.
Feb  1 09:27:50 np0005604375 python3[18060]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.219:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.219:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:27:50 np0005604375 python3[18060]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb  1 09:27:50 np0005604375 systemd[1]: session-4.scope: Deactivated successfully.
Feb  1 09:27:50 np0005604375 systemd[1]: session-4.scope: Consumed 39.262s CPU time.
Feb  1 09:27:50 np0005604375 systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Feb  1 09:27:50 np0005604375 systemd-logind[786]: Removed session 4.
Feb  1 09:28:11 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 09:28:11 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 09:28:11 np0005604375 systemd[1]: man-db-cache-update.service: Consumed 38.715s CPU time.
Feb  1 09:28:11 np0005604375 systemd[1]: run-rfa7bfb4df0e24838b3ba88efed88c531.service: Deactivated successfully.
Feb  1 09:28:12 np0005604375 systemd-logind[786]: New session 5 of user zuul.
Feb  1 09:28:12 np0005604375 systemd[1]: Started Session 5 of User zuul.
Feb  1 09:28:12 np0005604375 python3[29666]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTEnvHfA3HfXJBZL6COftw7wlOkNG3L9xY8it+Bi82MvcOrDXYPdlkNOv7Dds48b4NNwxcMKPs0qLhYP0ww/mQ= zuul@np0005604374.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:28:12 np0005604375 python3[29692]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTEnvHfA3HfXJBZL6COftw7wlOkNG3L9xY8it+Bi82MvcOrDXYPdlkNOv7Dds48b4NNwxcMKPs0qLhYP0ww/mQ= zuul@np0005604374.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:28:12 np0005604375 irqbalance[781]: Cannot change IRQ 27 affinity: Operation not permitted
Feb  1 09:28:12 np0005604375 irqbalance[781]: IRQ 27 affinity is now unmanaged
Feb  1 09:28:13 np0005604375 python3[29718]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604375.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb  1 09:28:13 np0005604375 python3[29752]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTEnvHfA3HfXJBZL6COftw7wlOkNG3L9xY8it+Bi82MvcOrDXYPdlkNOv7Dds48b4NNwxcMKPs0qLhYP0ww/mQ= zuul@np0005604374.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  1 09:28:14 np0005604375 python3[29830]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:28:14 np0005604375 python3[29903]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769956093.996258-135-218452704060140/source _original_basename=tmpjfnzqhjg follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:28:15 np0005604375 python3[29953]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb  1 09:28:15 np0005604375 systemd[1]: Starting Hostname Service...
Feb  1 09:28:15 np0005604375 systemd[1]: Started Hostname Service.
Feb  1 09:28:15 np0005604375 systemd-hostnamed[29957]: Changed pretty hostname to 'compute-0'
Feb  1 09:28:15 np0005604375 systemd-hostnamed[29957]: Hostname set to <compute-0> (static)
Feb  1 09:28:15 np0005604375 NetworkManager[7185]: <info>  [1769956095.5706] hostname: static hostname changed from "np0005604375.novalocal" to "compute-0"
Feb  1 09:28:15 np0005604375 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  1 09:28:15 np0005604375 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  1 09:28:15 np0005604375 systemd[1]: session-5.scope: Deactivated successfully.
Feb  1 09:28:15 np0005604375 systemd[1]: session-5.scope: Consumed 2.018s CPU time.
Feb  1 09:28:15 np0005604375 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Feb  1 09:28:15 np0005604375 systemd-logind[786]: Removed session 5.
Feb  1 09:28:25 np0005604375 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  1 09:28:45 np0005604375 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  1 09:31:37 np0005604375 systemd-logind[786]: New session 6 of user zuul.
Feb  1 09:31:37 np0005604375 systemd[1]: Started Session 6 of User zuul.
Feb  1 09:31:38 np0005604375 python3[30055]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:31:39 np0005604375 python3[30171]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:31:39 np0005604375 python3[30244]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:31:40 np0005604375 python3[30270]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:31:40 np0005604375 python3[30343]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:31:40 np0005604375 python3[30369]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:31:40 np0005604375 python3[30442]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:31:41 np0005604375 python3[30468]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:31:41 np0005604375 python3[30541]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:31:41 np0005604375 python3[30567]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:31:41 np0005604375 python3[30640]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:31:41 np0005604375 python3[30666]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:31:42 np0005604375 python3[30739]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:31:42 np0005604375 python3[30765]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:31:42 np0005604375 python3[30838]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:31:53 np0005604375 python3[30896]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:32:41 np0005604375 systemd[1]: Starting Cleanup of Temporary Directories...
Feb  1 09:32:41 np0005604375 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb  1 09:32:41 np0005604375 systemd[1]: Finished Cleanup of Temporary Directories.
Feb  1 09:32:41 np0005604375 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb  1 09:36:52 np0005604375 systemd[1]: session-6.scope: Deactivated successfully.
Feb  1 09:36:52 np0005604375 systemd[1]: session-6.scope: Consumed 3.952s CPU time.
Feb  1 09:36:52 np0005604375 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Feb  1 09:36:52 np0005604375 systemd-logind[786]: Removed session 6.
Feb  1 09:42:39 np0005604375 systemd-logind[786]: New session 7 of user zuul.
Feb  1 09:42:39 np0005604375 systemd[1]: Started Session 7 of User zuul.
Feb  1 09:42:40 np0005604375 python3.9[31065]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:42:41 np0005604375 python3.9[31246]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:42:48 np0005604375 systemd[1]: session-7.scope: Deactivated successfully.
Feb  1 09:42:48 np0005604375 systemd[1]: session-7.scope: Consumed 7.158s CPU time.
Feb  1 09:42:48 np0005604375 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Feb  1 09:42:48 np0005604375 systemd-logind[786]: Removed session 7.
Feb  1 09:43:04 np0005604375 systemd-logind[786]: New session 8 of user zuul.
Feb  1 09:43:04 np0005604375 systemd[1]: Started Session 8 of User zuul.
Feb  1 09:43:05 np0005604375 python3.9[31456]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  1 09:43:06 np0005604375 python3.9[31630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:43:07 np0005604375 python3.9[31782]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:43:07 np0005604375 python3.9[31935]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:43:08 np0005604375 python3.9[32087]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:43:09 np0005604375 python3.9[32239]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:43:09 np0005604375 python3.9[32362]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769956988.7167428-68-207176494516641/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:43:10 np0005604375 python3.9[32514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:43:11 np0005604375 python3.9[32670]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:43:11 np0005604375 python3.9[32822]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:43:12 np0005604375 python3.9[32972]: ansible-ansible.builtin.service_facts Invoked
Feb  1 09:43:15 np0005604375 python3.9[33225]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:43:15 np0005604375 python3.9[33375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:43:17 np0005604375 python3.9[33529]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:43:18 np0005604375 python3.9[33687]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:43:18 np0005604375 python3.9[33771]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:43:59 np0005604375 systemd[1]: Reloading.
Feb  1 09:43:59 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:43:59 np0005604375 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb  1 09:43:59 np0005604375 systemd[1]: Reloading.
Feb  1 09:43:59 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:43:59 np0005604375 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb  1 09:43:59 np0005604375 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb  1 09:43:59 np0005604375 systemd[1]: Reloading.
Feb  1 09:43:59 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:44:00 np0005604375 systemd[1]: Listening on LVM2 poll daemon socket.
Feb  1 09:44:00 np0005604375 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb  1 09:44:00 np0005604375 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb  1 09:44:00 np0005604375 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb  1 09:44:52 np0005604375 kernel: SELinux:  Converting 2726 SID table entries...
Feb  1 09:44:52 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 09:44:52 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 09:44:52 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 09:44:52 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 09:44:52 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 09:44:52 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 09:44:52 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 09:44:53 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb  1 09:44:53 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 09:44:53 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 09:44:53 np0005604375 systemd[1]: Reloading.
Feb  1 09:44:53 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:44:53 np0005604375 systemd[1]: Starting dnf makecache...
Feb  1 09:44:53 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 09:44:53 np0005604375 dnf[34408]: Failed determining last makecache time.
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-barbican-42b4c41831408a8e323  92 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-python-glean-642fffe0203a8ffcc2443db52 150 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-cinder-1c00d6490d88e436f26ef 145 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-python-stevedore-c4acc5639fd2329372142 144 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-python-cloudkitty-tests-tempest-783703 142 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-diskimage-builder-61b717cc45660834fe9a 165 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-nova-eaa65f0b85123a4ee343246 157 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-python-designate-tests-tempest-347fdbc 149 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-glance-1fd12c29b339f30fe823e 122 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 120 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-manila-d783d10e75495b73866db 119 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-neutron-95cadbd379667c8520c8 128 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-octavia-5975097dd4b021385178 120 kB/s | 3.0 kB     00:00
Feb  1 09:44:53 np0005604375 dnf[34408]: delorean-openstack-watcher-c014f81a8647287f6dcc 114 kB/s | 3.0 kB     00:00
Feb  1 09:44:54 np0005604375 dnf[34408]: delorean-python-tcib-78032d201b02cee27e8e644c61 132 kB/s | 3.0 kB     00:00
Feb  1 09:44:54 np0005604375 dnf[34408]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 124 kB/s | 3.0 kB     00:00
Feb  1 09:44:54 np0005604375 dnf[34408]: delorean-openstack-swift-dc98a8463506ac520c469a 133 kB/s | 3.0 kB     00:00
Feb  1 09:44:54 np0005604375 dnf[34408]: delorean-python-tempestconf-8515371b7cceebd4282 154 kB/s | 3.0 kB     00:00
Feb  1 09:44:54 np0005604375 dnf[34408]: delorean-openstack-heat-ui-013accbfd179753bc3f0 145 kB/s | 3.0 kB     00:00
Feb  1 09:44:54 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 09:44:54 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 09:44:54 np0005604375 systemd[1]: man-db-cache-update.service: Consumed 1.039s CPU time.
Feb  1 09:44:54 np0005604375 systemd[1]: run-rc940f10d34684257864d073cb96d4272.service: Deactivated successfully.
Feb  1 09:44:54 np0005604375 dnf[34408]: CentOS Stream 9 - BaseOS                         48 kB/s | 6.7 kB     00:00
Feb  1 09:44:54 np0005604375 python3.9[35308]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:44:54 np0005604375 dnf[34408]: CentOS Stream 9 - AppStream                      28 kB/s | 6.8 kB     00:00
Feb  1 09:44:54 np0005604375 dnf[34408]: CentOS Stream 9 - CRB                            69 kB/s | 6.6 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: dlrn-antelope-testing                            88 kB/s | 3.0 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: dlrn-antelope-build-deps                         91 kB/s | 3.0 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: centos9-rabbitmq                                 95 kB/s | 3.0 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: centos9-storage                                  34 kB/s | 3.0 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: centos9-opstools                                 48 kB/s | 3.0 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: NFV SIG OpenvSwitch                              33 kB/s | 3.0 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: repo-setup-centos-appstream                     118 kB/s | 4.4 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: repo-setup-centos-baseos                        174 kB/s | 3.9 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: repo-setup-centos-highavailability              138 kB/s | 3.9 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: repo-setup-centos-powertools                    205 kB/s | 4.3 kB     00:00
Feb  1 09:44:55 np0005604375 dnf[34408]: Extra Packages for Enterprise Linux 9 - x86_64  166 kB/s |  30 kB     00:00
Feb  1 09:44:56 np0005604375 python3.9[35611]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  1 09:44:56 np0005604375 dnf[34408]: Metadata cache created.
Feb  1 09:44:56 np0005604375 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb  1 09:44:56 np0005604375 systemd[1]: Finished dnf makecache.
Feb  1 09:44:56 np0005604375 systemd[1]: dnf-makecache.service: Consumed 1.844s CPU time.
Feb  1 09:44:57 np0005604375 python3.9[35764]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  1 09:44:59 np0005604375 python3.9[35917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:45:00 np0005604375 python3.9[36069]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  1 09:45:01 np0005604375 python3.9[36221]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:45:01 np0005604375 python3.9[36373]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:45:02 np0005604375 python3.9[36496]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957101.3966815-231-273310472300441/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:45:04 np0005604375 python3.9[36648]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:45:05 np0005604375 python3.9[36800]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:45:06 np0005604375 python3.9[36953]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:45:07 np0005604375 python3.9[37105]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  1 09:45:07 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 09:45:07 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 09:45:08 np0005604375 python3.9[37259]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  1 09:45:08 np0005604375 python3.9[37417]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  1 09:45:09 np0005604375 python3.9[37577]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  1 09:45:10 np0005604375 python3.9[37730]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  1 09:45:10 np0005604375 python3.9[37888]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  1 09:45:11 np0005604375 python3.9[38040]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:45:13 np0005604375 python3.9[38193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:45:14 np0005604375 python3.9[38345]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:45:14 np0005604375 python3.9[38468]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957113.634022-350-38713113703165/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:45:15 np0005604375 python3.9[38620]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:45:15 np0005604375 systemd[1]: Starting Load Kernel Modules...
Feb  1 09:45:15 np0005604375 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  1 09:45:15 np0005604375 kernel: Bridge firewalling registered
Feb  1 09:45:15 np0005604375 systemd-modules-load[38624]: Inserted module 'br_netfilter'
Feb  1 09:45:15 np0005604375 systemd[1]: Finished Load Kernel Modules.
Feb  1 09:45:16 np0005604375 python3.9[38779]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:45:17 np0005604375 python3.9[38902]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957116.0176466-373-259226007622768/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:45:17 np0005604375 python3.9[39054]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:45:20 np0005604375 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb  1 09:45:20 np0005604375 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb  1 09:45:21 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 09:45:21 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 09:45:21 np0005604375 systemd[1]: Reloading.
Feb  1 09:45:21 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:45:21 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 09:45:22 np0005604375 python3.9[41011]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:45:23 np0005604375 python3.9[42319]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  1 09:45:23 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 09:45:23 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 09:45:23 np0005604375 systemd[1]: man-db-cache-update.service: Consumed 2.758s CPU time.
Feb  1 09:45:23 np0005604375 systemd[1]: run-r6e341c3ff4d941d5b210cb8999349135.service: Deactivated successfully.
Feb  1 09:45:23 np0005604375 python3.9[43105]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:45:24 np0005604375 python3.9[43258]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:45:24 np0005604375 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  1 09:45:24 np0005604375 systemd[1]: Starting Authorization Manager...
Feb  1 09:45:24 np0005604375 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  1 09:45:24 np0005604375 polkitd[43475]: Started polkitd version 0.117
Feb  1 09:45:24 np0005604375 systemd[1]: Started Authorization Manager.
Feb  1 09:45:25 np0005604375 python3.9[43645]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:45:25 np0005604375 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  1 09:45:25 np0005604375 systemd[1]: tuned.service: Deactivated successfully.
Feb  1 09:45:25 np0005604375 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  1 09:45:25 np0005604375 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  1 09:45:25 np0005604375 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  1 09:45:26 np0005604375 python3.9[43807]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  1 09:45:28 np0005604375 python3.9[43959]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:45:28 np0005604375 systemd[1]: Reloading.
Feb  1 09:45:28 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:45:29 np0005604375 python3.9[44148]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:45:29 np0005604375 systemd[1]: Reloading.
Feb  1 09:45:29 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:45:30 np0005604375 python3.9[44338]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:45:31 np0005604375 python3.9[44491]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:45:31 np0005604375 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb  1 09:45:31 np0005604375 python3.9[44644]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:45:33 np0005604375 python3.9[44806]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:45:34 np0005604375 python3.9[44959]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:45:34 np0005604375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  1 09:45:34 np0005604375 systemd[1]: Stopped Apply Kernel Variables.
Feb  1 09:45:34 np0005604375 systemd[1]: Stopping Apply Kernel Variables...
Feb  1 09:45:34 np0005604375 systemd[1]: Starting Apply Kernel Variables...
Feb  1 09:45:34 np0005604375 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  1 09:45:34 np0005604375 systemd[1]: Finished Apply Kernel Variables.
Feb  1 09:45:34 np0005604375 systemd[1]: session-8.scope: Deactivated successfully.
Feb  1 09:45:34 np0005604375 systemd[1]: session-8.scope: Consumed 1min 59.168s CPU time.
Feb  1 09:45:34 np0005604375 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Feb  1 09:45:34 np0005604375 systemd-logind[786]: Removed session 8.
Feb  1 09:45:40 np0005604375 systemd-logind[786]: New session 9 of user zuul.
Feb  1 09:45:40 np0005604375 systemd[1]: Started Session 9 of User zuul.
Feb  1 09:45:40 np0005604375 python3.9[45142]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:45:41 np0005604375 python3.9[45298]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  1 09:45:42 np0005604375 python3.9[45451]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  1 09:45:43 np0005604375 python3.9[45609]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  1 09:45:44 np0005604375 python3.9[45769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:45:45 np0005604375 python3.9[45853]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  1 09:45:48 np0005604375 python3.9[46017]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:45:58 np0005604375 kernel: SELinux:  Converting 2739 SID table entries...
Feb  1 09:45:58 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 09:45:58 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 09:45:58 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 09:45:58 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 09:45:58 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 09:45:58 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 09:45:58 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 09:45:58 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb  1 09:45:58 np0005604375 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb  1 09:45:59 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 09:45:59 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 09:45:59 np0005604375 systemd[1]: Reloading.
Feb  1 09:45:59 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:45:59 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:45:59 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 09:46:00 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 09:46:00 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 09:46:00 np0005604375 systemd[1]: run-r67e3b1238b9e4b72bb6453438317428f.service: Deactivated successfully.
Feb  1 09:46:01 np0005604375 python3.9[47118]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 09:46:01 np0005604375 systemd[1]: Reloading.
Feb  1 09:46:01 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:46:01 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:46:01 np0005604375 systemd[1]: Starting Open vSwitch Database Unit...
Feb  1 09:46:01 np0005604375 chown[47160]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb  1 09:46:01 np0005604375 ovs-ctl[47165]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb  1 09:46:01 np0005604375 ovs-ctl[47165]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb  1 09:46:01 np0005604375 ovs-ctl[47165]: Starting ovsdb-server [  OK  ]
Feb  1 09:46:01 np0005604375 ovs-vsctl[47214]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb  1 09:46:01 np0005604375 ovs-vsctl[47234]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c3bd6005-873a-4620-bb39-624ed33e90e2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb  1 09:46:01 np0005604375 ovs-ctl[47165]: Configuring Open vSwitch system IDs [  OK  ]
Feb  1 09:46:01 np0005604375 ovs-ctl[47165]: Enabling remote OVSDB managers [  OK  ]
Feb  1 09:46:01 np0005604375 systemd[1]: Started Open vSwitch Database Unit.
Feb  1 09:46:01 np0005604375 ovs-vsctl[47240]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  1 09:46:01 np0005604375 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb  1 09:46:02 np0005604375 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb  1 09:46:02 np0005604375 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb  1 09:46:02 np0005604375 kernel: openvswitch: Open vSwitch switching datapath
Feb  1 09:46:02 np0005604375 ovs-ctl[47284]: Inserting openvswitch module [  OK  ]
Feb  1 09:46:02 np0005604375 ovs-ctl[47253]: Starting ovs-vswitchd [  OK  ]
Feb  1 09:46:02 np0005604375 ovs-vsctl[47303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  1 09:46:02 np0005604375 ovs-ctl[47253]: Enabling remote OVSDB managers [  OK  ]
Feb  1 09:46:02 np0005604375 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb  1 09:46:02 np0005604375 systemd[1]: Starting Open vSwitch...
Feb  1 09:46:02 np0005604375 systemd[1]: Finished Open vSwitch.
Feb  1 09:46:03 np0005604375 python3.9[47454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:46:04 np0005604375 python3.9[47606]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  1 09:46:04 np0005604375 kernel: SELinux:  Converting 2753 SID table entries...
Feb  1 09:46:04 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 09:46:04 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 09:46:04 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 09:46:04 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 09:46:04 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 09:46:04 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 09:46:04 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 09:46:05 np0005604375 python3.9[47761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:46:06 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb  1 09:46:06 np0005604375 python3.9[47919]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:46:08 np0005604375 python3.9[48072]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:46:10 np0005604375 python3.9[48359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  1 09:46:11 np0005604375 python3.9[48509]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:46:11 np0005604375 python3.9[48663]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:46:13 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 09:46:13 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 09:46:13 np0005604375 systemd[1]: Reloading.
Feb  1 09:46:13 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:46:13 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:46:13 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 09:46:13 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 09:46:13 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 09:46:13 np0005604375 systemd[1]: run-r266032b50d7449788b8ae9995d586317.service: Deactivated successfully.
Feb  1 09:46:14 np0005604375 python3.9[48980]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:46:14 np0005604375 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  1 09:46:14 np0005604375 systemd[1]: Stopped Network Manager Wait Online.
Feb  1 09:46:14 np0005604375 systemd[1]: Stopping Network Manager Wait Online...
Feb  1 09:46:14 np0005604375 systemd[1]: Stopping Network Manager...
Feb  1 09:46:14 np0005604375 NetworkManager[7185]: <info>  [1769957174.5328] caught SIGTERM, shutting down normally.
Feb  1 09:46:14 np0005604375 NetworkManager[7185]: <info>  [1769957174.5339] dhcp4 (eth0): canceled DHCP transaction
Feb  1 09:46:14 np0005604375 NetworkManager[7185]: <info>  [1769957174.5340] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:46:14 np0005604375 NetworkManager[7185]: <info>  [1769957174.5340] dhcp4 (eth0): state changed no lease
Feb  1 09:46:14 np0005604375 NetworkManager[7185]: <info>  [1769957174.5342] manager: NetworkManager state is now CONNECTED_SITE
Feb  1 09:46:14 np0005604375 NetworkManager[7185]: <info>  [1769957174.5395] exiting (success)
Feb  1 09:46:14 np0005604375 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  1 09:46:14 np0005604375 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  1 09:46:14 np0005604375 systemd[1]: Stopped Network Manager.
Feb  1 09:46:14 np0005604375 systemd[1]: NetworkManager.service: Consumed 11.980s CPU time, 4.1M memory peak, read 0B from disk, written 33.0K to disk.
Feb  1 09:46:14 np0005604375 systemd[1]: Starting Network Manager...
Feb  1 09:46:14 np0005604375 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.5812] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:bc6eed0e-afac-49e7-b313-e00c329dc99a)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.5812] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.5851] manager[0x56043530f000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  1 09:46:14 np0005604375 systemd[1]: Starting Hostname Service...
Feb  1 09:46:14 np0005604375 systemd[1]: Started Hostname Service.
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6800] hostname: hostname: using hostnamed
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6801] hostname: static hostname changed from (none) to "compute-0"
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6805] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6808] manager[0x56043530f000]: rfkill: Wi-Fi hardware radio set enabled
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6808] manager[0x56043530f000]: rfkill: WWAN hardware radio set enabled
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6826] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6833] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6834] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6834] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6834] manager: Networking is enabled by state file
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6836] settings: Loaded settings plugin: keyfile (internal)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6839] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6856] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6861] dhcp: init: Using DHCP client 'internal'
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6863] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6866] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6869] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6875] device (lo): Activation: starting connection 'lo' (993b83ea-ade5-4a5e-93d7-372f4fe03bae)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6880] device (eth0): carrier: link connected
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6883] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6887] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6887] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6892] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6897] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6900] device (eth1): carrier: link connected
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6903] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6907] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0) (indicated)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6907] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6911] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6917] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb  1 09:46:14 np0005604375 systemd[1]: Started Network Manager.
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6920] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6928] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6930] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6931] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6933] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6935] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6937] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6948] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6951] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6957] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6959] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6964] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6972] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6989] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6990] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.6994] device (lo): Activation: successful, device activated.
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7007] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7008] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7011] manager: NetworkManager state is now CONNECTED_LOCAL
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7012] device (eth1): Activation: successful, device activated.
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7619] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7625] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  1 09:46:14 np0005604375 systemd[1]: Starting Network Manager Wait Online...
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7674] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7697] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7698] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7700] manager: NetworkManager state is now CONNECTED_SITE
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7702] device (eth0): Activation: successful, device activated.
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7705] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  1 09:46:14 np0005604375 NetworkManager[48987]: <info>  [1769957174.7707] manager: startup complete
Feb  1 09:46:14 np0005604375 systemd[1]: Finished Network Manager Wait Online.
Feb  1 09:46:15 np0005604375 python3.9[49206]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:46:19 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 09:46:19 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 09:46:19 np0005604375 systemd[1]: Reloading.
Feb  1 09:46:19 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:46:19 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:46:19 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 09:46:19 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 09:46:19 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 09:46:19 np0005604375 systemd[1]: run-r59cdcf63ff774da69b38f9accf4c3fb6.service: Deactivated successfully.
Feb  1 09:46:20 np0005604375 python3.9[49665]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:46:21 np0005604375 python3.9[49817]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:21 np0005604375 python3.9[49971]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:22 np0005604375 python3.9[50123]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:23 np0005604375 python3.9[50275]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:23 np0005604375 python3.9[50427]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:24 np0005604375 python3.9[50579]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:46:24 np0005604375 python3.9[50702]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957183.7506762-224-179053541521192/.source _original_basename=.tbw7hkmw follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:24 np0005604375 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  1 09:46:25 np0005604375 python3.9[50855]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:25 np0005604375 python3.9[51007]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb  1 09:46:26 np0005604375 python3.9[51159]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:28 np0005604375 python3.9[51586]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb  1 09:46:29 np0005604375 ansible-async_wrapper.py[51761]: Invoked with j837567535167 300 /home/zuul/.ansible/tmp/ansible-tmp-1769957188.6270986-290-176120901093420/AnsiballZ_edpm_os_net_config.py _
Feb  1 09:46:29 np0005604375 ansible-async_wrapper.py[51764]: Starting module and watcher
Feb  1 09:46:29 np0005604375 ansible-async_wrapper.py[51764]: Start watching 51765 (300)
Feb  1 09:46:29 np0005604375 ansible-async_wrapper.py[51765]: Start module (51765)
Feb  1 09:46:29 np0005604375 ansible-async_wrapper.py[51761]: Return async_wrapper task started.
Feb  1 09:46:29 np0005604375 python3.9[51766]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Feb  1 09:46:30 np0005604375 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb  1 09:46:30 np0005604375 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb  1 09:46:30 np0005604375 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb  1 09:46:30 np0005604375 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb  1 09:46:30 np0005604375 kernel: cfg80211: failed to load regulatory.db
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.5728] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.5754] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6467] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6471] audit: op="connection-add" uuid="bb3c6b02-6650-44b1-b29e-a73688a7f962" name="br-ex-br" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6494] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6497] audit: op="connection-add" uuid="9ea40ce2-b169-446f-bdb0-6b894c24e30c" name="br-ex-port" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6516] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6519] audit: op="connection-add" uuid="e2adbb49-e2d3-43b6-86fa-16ac6b1b47ae" name="eth1-port" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6540] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6542] audit: op="connection-add" uuid="a49e08c5-32d1-4198-85f2-a0171be3d5a1" name="vlan20-port" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6557] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6559] audit: op="connection-add" uuid="b7755b43-8a91-4d3f-a7ba-7a331cd05355" name="vlan21-port" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6571] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6574] audit: op="connection-add" uuid="cab2260a-ed04-4d14-8a3b-3b49c1bea63e" name="vlan22-port" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6586] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6588] audit: op="connection-add" uuid="edb26f41-63cc-4950-9b08-0a4cf7ca45e4" name="vlan23-port" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6609] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6628] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6630] audit: op="connection-add" uuid="cc597100-9c89-42bf-8c8f-2fbabfb34bac" name="br-ex-if" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6676] audit: op="connection-update" uuid="98bb363c-97f6-5419-a1f6-12d0df6ca2e0" name="ci-private-network" args="connection.timestamp,connection.controller,connection.master,connection.port-type,connection.slave-type,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.routes,ipv4.never-default,ipv4.routing-rules,ovs-interface.type,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.routing-rules,ovs-external-ids.data" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6705] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6709] audit: op="connection-add" uuid="c829836a-8093-42c6-94fe-e2f2eb906a76" name="vlan20-if" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6739] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6742] audit: op="connection-add" uuid="44823588-c624-432d-897b-bf1351217920" name="vlan21-if" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6771] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6775] audit: op="connection-add" uuid="f6254eb2-3870-478c-8fa6-d72693ac70ed" name="vlan22-if" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6806] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6809] audit: op="connection-add" uuid="981ba108-96f7-41eb-9bfb-f97b212e521e" name="vlan23-if" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6830] audit: op="connection-delete" uuid="91277a2e-344e-3388-a112-2b38838ac4e5" name="Wired connection 1" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6852] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.6858] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6871] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6886] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (bb3c6b02-6650-44b1-b29e-a73688a7f962)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6887] audit: op="connection-activate" uuid="bb3c6b02-6650-44b1-b29e-a73688a7f962" name="br-ex-br" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6891] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.6892] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6903] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6910] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9ea40ce2-b169-446f-bdb0-6b894c24e30c)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6915] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.6916] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6925] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6932] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e2adbb49-e2d3-43b6-86fa-16ac6b1b47ae)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6937] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.6938] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6948] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6956] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (a49e08c5-32d1-4198-85f2-a0171be3d5a1)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6960] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.6961] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6973] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6980] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b7755b43-8a91-4d3f-a7ba-7a331cd05355)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6984] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.6986] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.6996] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7006] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (cab2260a-ed04-4d14-8a3b-3b49c1bea63e)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7010] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.7011] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7021] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7029] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (edb26f41-63cc-4950-9b08-0a4cf7ca45e4)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7030] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7036] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7040] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7054] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.7055] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7060] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7068] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cc597100-9c89-42bf-8c8f-2fbabfb34bac)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7069] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7076] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7080] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7082] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7084] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7105] device (eth1): disconnecting for new activation request.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7106] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7111] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7115] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7116] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7122] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.7124] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7131] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7141] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c829836a-8093-42c6-94fe-e2f2eb906a76)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7142] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7148] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7151] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7154] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7159] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.7161] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7168] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7175] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (44823588-c624-432d-897b-bf1351217920)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7176] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7183] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7187] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7190] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7195] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.7197] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7204] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7212] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (f6254eb2-3870-478c-8fa6-d72693ac70ed)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7214] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7219] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7222] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7224] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7229] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <warn>  [1769957191.7231] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7237] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7245] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (981ba108-96f7-41eb-9bfb-f97b212e521e)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7246] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7252] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7255] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7258] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7261] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7287] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7292] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7299] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7302] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7316] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7325] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 kernel: ovs-system: entered promiscuous mode
Feb  1 09:46:31 np0005604375 kernel: Timeout policy base is empty
Feb  1 09:46:31 np0005604375 systemd-udevd[51772]: Network interface NamePolicy= disabled on kernel command line.
Feb  1 09:46:31 np0005604375 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7389] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7396] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7400] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7409] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7417] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7424] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7427] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7436] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7444] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7451] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7454] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7464] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7471] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7477] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7479] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7489] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7496] dhcp4 (eth0): canceled DHCP transaction
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7497] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7497] dhcp4 (eth0): state changed no lease
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7499] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7515] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7522] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51767 uid=0 result="fail" reason="Device is not activated"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7532] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7542] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb  1 09:46:31 np0005604375 kernel: br-ex: entered promiscuous mode
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7633] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7640] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7642] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7646] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7648] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7650] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7653] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7656] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 kernel: vlan21: entered promiscuous mode
Feb  1 09:46:31 np0005604375 systemd-udevd[51771]: Network interface NamePolicy= disabled on kernel command line.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7667] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7673] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7688] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7694] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7699] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7703] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7709] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7712] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7716] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7719] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7722] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7726] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7729] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 kernel: vlan20: entered promiscuous mode
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7732] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7736] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7741] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7745] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7751] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7781] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7834] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7836] device (eth1): released from controller device eth1
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7844] device (eth1): disconnecting for new activation request.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7845] audit: op="connection-activate" uuid="98bb363c-97f6-5419-a1f6-12d0df6ca2e0" name="ci-private-network" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7851] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7868] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7873] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7875] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7896] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7903] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7906] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7928] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7932] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7938] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 kernel: vlan22: entered promiscuous mode
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7962] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7972] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7980] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7991] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7994] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.7997] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8002] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8015] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8025] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8029] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8036] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8042] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8050] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8057] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8064] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8074] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8084] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8087] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8092] device (eth1): Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8099] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8100] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8106] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  1 09:46:31 np0005604375 kernel: vlan23: entered promiscuous mode
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8221] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8234] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8254] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8256] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  1 09:46:31 np0005604375 NetworkManager[48987]: <info>  [1769957191.8263] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  1 09:46:32 np0005604375 irqbalance[781]: Cannot change IRQ 26 affinity: Operation not permitted
Feb  1 09:46:32 np0005604375 irqbalance[781]: IRQ 26 affinity is now unmanaged
Feb  1 09:46:32 np0005604375 NetworkManager[48987]: <info>  [1769957192.9561] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.1755] checkpoint[0x5604352e5950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.1757] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 python3.9[52131]: ansible-ansible.legacy.async_status Invoked with jid=j837567535167.51761 mode=status _async_dir=/root/.ansible_async
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.5200] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.5216] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.7402] audit: op="networking-control" arg="global-dns-configuration" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.7427] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.7489] audit: op="networking-control" arg="global-dns-configuration" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.7516] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.9035] checkpoint[0x5604352e5a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb  1 09:46:33 np0005604375 NetworkManager[48987]: <info>  [1769957193.9040] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb  1 09:46:33 np0005604375 ansible-async_wrapper.py[51765]: Module complete (51765)
Feb  1 09:46:34 np0005604375 ansible-async_wrapper.py[51764]: Done in kid B.
Feb  1 09:46:36 np0005604375 python3.9[52237]: ansible-ansible.legacy.async_status Invoked with jid=j837567535167.51761 mode=status _async_dir=/root/.ansible_async
Feb  1 09:46:37 np0005604375 python3.9[52337]: ansible-ansible.legacy.async_status Invoked with jid=j837567535167.51761 mode=cleanup _async_dir=/root/.ansible_async
Feb  1 09:46:37 np0005604375 python3.9[52489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:46:38 np0005604375 python3.9[52612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957197.2743666-317-82045283944133/.source.returncode _original_basename=.oksl7ea6 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:38 np0005604375 python3.9[52764]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:46:39 np0005604375 python3.9[52887]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957198.346142-333-261974950551774/.source.cfg _original_basename=.ja_blvls follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:46:39 np0005604375 python3.9[53039]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:46:39 np0005604375 systemd[1]: Reloading Network Manager...
Feb  1 09:46:39 np0005604375 NetworkManager[48987]: <info>  [1769957199.9861] audit: op="reload" arg="0" pid=53044 uid=0 result="success"
Feb  1 09:46:39 np0005604375 NetworkManager[48987]: <info>  [1769957199.9869] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb  1 09:46:39 np0005604375 systemd[1]: Reloaded Network Manager.
Feb  1 09:46:40 np0005604375 systemd[1]: session-9.scope: Deactivated successfully.
Feb  1 09:46:40 np0005604375 systemd[1]: session-9.scope: Consumed 42.469s CPU time.
Feb  1 09:46:40 np0005604375 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Feb  1 09:46:40 np0005604375 systemd-logind[786]: Removed session 9.
Feb  1 09:46:44 np0005604375 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  1 09:46:45 np0005604375 systemd-logind[786]: New session 10 of user zuul.
Feb  1 09:46:45 np0005604375 systemd[1]: Started Session 10 of User zuul.
Feb  1 09:46:46 np0005604375 python3.9[53230]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:46:47 np0005604375 python3.9[53385]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:46:48 np0005604375 python3.9[53578]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:46:48 np0005604375 systemd[1]: session-10.scope: Deactivated successfully.
Feb  1 09:46:48 np0005604375 systemd[1]: session-10.scope: Consumed 1.978s CPU time.
Feb  1 09:46:48 np0005604375 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Feb  1 09:46:48 np0005604375 systemd-logind[786]: Removed session 10.
Feb  1 09:46:50 np0005604375 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  1 09:46:53 np0005604375 systemd-logind[786]: New session 11 of user zuul.
Feb  1 09:46:53 np0005604375 systemd[1]: Started Session 11 of User zuul.
Feb  1 09:46:54 np0005604375 python3.9[53761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:46:55 np0005604375 python3.9[53915]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:46:56 np0005604375 python3.9[54072]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:46:57 np0005604375 python3.9[54156]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:46:59 np0005604375 python3.9[54309]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:47:00 np0005604375 python3.9[54505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:01 np0005604375 python3.9[54657]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:47:01 np0005604375 systemd[1]: var-lib-containers-storage-overlay-compat3401655811-merged.mount: Deactivated successfully.
Feb  1 09:47:01 np0005604375 podman[54658]: 2026-02-01 14:47:01.202544725 +0000 UTC m=+0.049016496 system refresh
Feb  1 09:47:01 np0005604375 python3.9[54820]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:02 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:47:02 np0005604375 python3.9[54944]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957221.36314-74-107677092904576/.source.json follow=False _original_basename=podman_network_config.j2 checksum=df849b85257a814448226e82824bb3e704ca309b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:03 np0005604375 python3.9[55096]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:03 np0005604375 python3.9[55219]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957222.7425578-89-167218906879635/.source.conf follow=False _original_basename=registries.conf.j2 checksum=4ef81be63c2e12f99316ad95ffda51a525eb684e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:04 np0005604375 python3.9[55371]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:04 np0005604375 python3.9[55523]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:05 np0005604375 python3.9[55675]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:05 np0005604375 python3.9[55827]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:06 np0005604375 python3.9[55979]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:47:08 np0005604375 python3.9[56132]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:47:09 np0005604375 python3.9[56286]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:47:09 np0005604375 python3.9[56438]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:47:10 np0005604375 python3.9[56590]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:47:11 np0005604375 python3.9[56743]: ansible-service_facts Invoked
Feb  1 09:47:11 np0005604375 network[56760]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 09:47:11 np0005604375 network[56761]: 'network-scripts' will be removed from distribution in near future.
Feb  1 09:47:11 np0005604375 network[56762]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 09:47:16 np0005604375 python3.9[57214]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:47:18 np0005604375 python3.9[57367]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb  1 09:47:19 np0005604375 python3.9[57519]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:20 np0005604375 python3.9[57644]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957239.447651-233-246819591713782/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:21 np0005604375 python3.9[57798]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:21 np0005604375 python3.9[57923]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957240.6963396-248-108917328663792/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:22 np0005604375 python3.9[58077]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:23 np0005604375 python3.9[58231]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:47:24 np0005604375 python3.9[58315]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:47:25 np0005604375 python3.9[58469]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:47:26 np0005604375 python3.9[58553]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:47:26 np0005604375 chronyd[800]: chronyd exiting
Feb  1 09:47:26 np0005604375 systemd[1]: Stopping NTP client/server...
Feb  1 09:47:26 np0005604375 systemd[1]: chronyd.service: Deactivated successfully.
Feb  1 09:47:26 np0005604375 systemd[1]: Stopped NTP client/server.
Feb  1 09:47:26 np0005604375 systemd[1]: Starting NTP client/server...
Feb  1 09:47:26 np0005604375 chronyd[58562]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  1 09:47:26 np0005604375 chronyd[58562]: Frequency -28.298 +/- 0.211 ppm read from /var/lib/chrony/drift
Feb  1 09:47:26 np0005604375 chronyd[58562]: Loaded seccomp filter (level 2)
Feb  1 09:47:26 np0005604375 systemd[1]: Started NTP client/server.
Feb  1 09:47:27 np0005604375 systemd[1]: session-11.scope: Deactivated successfully.
Feb  1 09:47:27 np0005604375 systemd[1]: session-11.scope: Consumed 22.408s CPU time.
Feb  1 09:47:27 np0005604375 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Feb  1 09:47:27 np0005604375 systemd-logind[786]: Removed session 11.
Feb  1 09:47:31 np0005604375 systemd-logind[786]: New session 12 of user zuul.
Feb  1 09:47:31 np0005604375 systemd[1]: Started Session 12 of User zuul.
Feb  1 09:47:32 np0005604375 python3.9[58743]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:33 np0005604375 python3.9[58895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:33 np0005604375 python3.9[59018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957252.7536201-29-15516203598004/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:34 np0005604375 systemd[1]: session-12.scope: Deactivated successfully.
Feb  1 09:47:34 np0005604375 systemd[1]: session-12.scope: Consumed 1.432s CPU time.
Feb  1 09:47:34 np0005604375 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Feb  1 09:47:34 np0005604375 systemd-logind[786]: Removed session 12.
Feb  1 09:47:40 np0005604375 systemd-logind[786]: New session 13 of user zuul.
Feb  1 09:47:40 np0005604375 systemd[1]: Started Session 13 of User zuul.
Feb  1 09:47:40 np0005604375 python3.9[59196]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:47:41 np0005604375 python3.9[59352]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:42 np0005604375 python3.9[59527]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:43 np0005604375 python3.9[59650]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769957262.084522-36-125584435900870/.source.json _original_basename=.apm2majj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:43 np0005604375 python3.9[59802]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:44 np0005604375 python3.9[59925]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957263.5540307-59-214762095206510/.source _original_basename=.67ypkozx follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:45 np0005604375 python3.9[60077]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:45 np0005604375 python3.9[60229]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:46 np0005604375 python3.9[60352]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957265.2075205-83-134559704754399/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:46 np0005604375 python3.9[60504]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:47 np0005604375 python3.9[60627]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957266.327233-83-227979644496675/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:47:48 np0005604375 python3.9[60779]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:48 np0005604375 python3.9[60931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:49 np0005604375 python3.9[61054]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957268.384963-120-233589864910945/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:49 np0005604375 python3.9[61206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:50 np0005604375 python3.9[61329]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957269.4312303-135-124670674768825/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:51 np0005604375 python3.9[61481]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:47:51 np0005604375 systemd[1]: Reloading.
Feb  1 09:47:51 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:47:51 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:47:51 np0005604375 systemd[1]: Reloading.
Feb  1 09:47:51 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:47:51 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:47:51 np0005604375 systemd[1]: Starting EDPM Container Shutdown...
Feb  1 09:47:51 np0005604375 systemd[1]: Finished EDPM Container Shutdown.
Feb  1 09:47:52 np0005604375 python3.9[61709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:52 np0005604375 python3.9[61832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957271.970541-158-117712906184099/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:53 np0005604375 python3.9[61984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:47:53 np0005604375 python3.9[62107]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957272.9583347-173-11243544153189/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:47:54 np0005604375 python3.9[62259]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:47:54 np0005604375 systemd[1]: Reloading.
Feb  1 09:47:54 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:47:54 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:47:54 np0005604375 systemd[1]: Reloading.
Feb  1 09:47:54 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:47:54 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:47:54 np0005604375 systemd[1]: Starting Create netns directory...
Feb  1 09:47:55 np0005604375 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  1 09:47:55 np0005604375 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  1 09:47:55 np0005604375 systemd[1]: Finished Create netns directory.
Feb  1 09:47:55 np0005604375 python3.9[62485]: ansible-ansible.builtin.service_facts Invoked
Feb  1 09:47:55 np0005604375 network[62502]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 09:47:55 np0005604375 network[62503]: 'network-scripts' will be removed from distribution in near future.
Feb  1 09:47:55 np0005604375 network[62504]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 09:47:58 np0005604375 python3.9[62766]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:47:58 np0005604375 systemd[1]: Reloading.
Feb  1 09:47:58 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:47:58 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:47:58 np0005604375 systemd[1]: Stopping IPv4 firewall with iptables...
Feb  1 09:47:58 np0005604375 iptables.init[62806]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb  1 09:47:59 np0005604375 iptables.init[62806]: iptables: Flushing firewall rules: [  OK  ]
Feb  1 09:47:59 np0005604375 systemd[1]: iptables.service: Deactivated successfully.
Feb  1 09:47:59 np0005604375 systemd[1]: Stopped IPv4 firewall with iptables.
Feb  1 09:47:59 np0005604375 python3.9[63002]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:48:00 np0005604375 python3.9[63156]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:48:00 np0005604375 systemd[1]: Reloading.
Feb  1 09:48:00 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:48:00 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:48:00 np0005604375 systemd[1]: Starting Netfilter Tables...
Feb  1 09:48:00 np0005604375 systemd[1]: Finished Netfilter Tables.
Feb  1 09:48:01 np0005604375 python3.9[63347]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:02 np0005604375 python3.9[63500]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:02 np0005604375 python3.9[63625]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957281.9972432-242-24920536579898/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:03 np0005604375 python3.9[63778]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:48:03 np0005604375 systemd[1]: Reloading OpenSSH server daemon...
Feb  1 09:48:03 np0005604375 systemd[1]: Reloaded OpenSSH server daemon.
Feb  1 09:48:04 np0005604375 python3.9[63934]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:05 np0005604375 python3.9[64086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:05 np0005604375 python3.9[64209]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957284.6146722-273-55483130654784/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:06 np0005604375 python3.9[64361]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  1 09:48:06 np0005604375 systemd[1]: Starting Time & Date Service...
Feb  1 09:48:06 np0005604375 systemd[1]: Started Time & Date Service.
Feb  1 09:48:07 np0005604375 python3.9[64517]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:08 np0005604375 python3.9[64669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:08 np0005604375 python3.9[64792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957287.6480298-308-99109954493205/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:09 np0005604375 python3.9[64944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:09 np0005604375 python3.9[65067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957288.7809994-323-164931460708408/.source.yaml _original_basename=.t7wpb3ot follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:10 np0005604375 python3.9[65219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:10 np0005604375 python3.9[65342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957289.7760112-338-56452916191340/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:11 np0005604375 python3.9[65494]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:12 np0005604375 python3.9[65647]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:12 np0005604375 python3[65800]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  1 09:48:13 np0005604375 python3.9[65952]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:14 np0005604375 python3.9[66075]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957293.1326847-377-14091338783412/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:14 np0005604375 python3.9[66227]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:15 np0005604375 python3.9[66350]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957294.266055-392-90358587094445/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:15 np0005604375 python3.9[66502]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:16 np0005604375 python3.9[66625]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957295.4859326-407-174117493731874/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:17 np0005604375 python3.9[66777]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:17 np0005604375 python3.9[66900]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957296.569769-422-151688404998444/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:18 np0005604375 python3.9[67052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:48:18 np0005604375 python3.9[67175]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957297.675034-437-270657438241689/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:19 np0005604375 python3.9[67329]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:20 np0005604375 python3.9[67481]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:20 np0005604375 python3.9[67640]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:21 np0005604375 python3.9[67793]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:21 np0005604375 python3.9[67945]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:22 np0005604375 python3.9[68097]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  1 09:48:23 np0005604375 python3.9[68250]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  1 09:48:23 np0005604375 systemd[1]: session-13.scope: Deactivated successfully.
Feb  1 09:48:23 np0005604375 systemd[1]: session-13.scope: Consumed 31.270s CPU time.
Feb  1 09:48:23 np0005604375 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Feb  1 09:48:23 np0005604375 systemd-logind[786]: Removed session 13.
Feb  1 09:48:29 np0005604375 systemd-logind[786]: New session 14 of user zuul.
Feb  1 09:48:29 np0005604375 systemd[1]: Started Session 14 of User zuul.
Feb  1 09:48:30 np0005604375 python3.9[68431]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb  1 09:48:30 np0005604375 python3.9[68583]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:48:31 np0005604375 python3.9[68735]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:48:32 np0005604375 python3.9[68887]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc91AYQnCiB0gaeezmTYoTbrfn13wkohxC7DIARmFIxyirGt426V9bgiFFpczr0aG/jVGnrXyqspzqVB5qhL9auJ/zaBQu1HuEMj/iSqvtp/5CDZvoCsolbRvc44zq2YNqAjmlgPQKe2f5MpaLGuLQIttz10Aj01eq50uvoj+Hccu0tBH2HrkQ6PphB9SaLI0ycAPr4B4WyPj9bCzJA9VYlxP6l4qkBqQjSDZLHnNDZP7N8pB38yfZB4EeE9v/ooH5aVJpDjV0Ciwtv4zQTv2W/HjYxaR9DsoVdVzUJKnzBZXW+kb2vE/A6rxP/+raWm+Z4jwydT2ZGCcAPe024SW6OUhi434WMJg15As435pj6vNzkfhYX2vPuIZed9Rue7qlD9kPRcg71YkvhFlja7MORqf5+fQtCfHTz9OakK3VATcSgFt4cP8UrBn+vqksDnD16t+njeWjWiJ84mM9yrOXBZblouKVTgDAkKsj+6dVItGIfTdsgn1Xo3eDknUU3Qk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5PgjrlIGkEPCJJDOYu9tmd12o/4td87MoNHh6uIuRZ#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNnAPVuUouOEBJ57nPy2aB3GgfV4SpHa2H6A23QhOI4mJOPaen6XNPSxMMgeo9r5YMVaTTaE35iZ3Xh9PT0kwJ4=#012 create=True mode=0644 path=/tmp/ansible.3jo0zcqm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:33 np0005604375 python3.9[69039]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3jo0zcqm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:34 np0005604375 python3.9[69193]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3jo0zcqm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:34 np0005604375 systemd[1]: session-14.scope: Deactivated successfully.
Feb  1 09:48:34 np0005604375 systemd[1]: session-14.scope: Consumed 2.885s CPU time.
Feb  1 09:48:34 np0005604375 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Feb  1 09:48:34 np0005604375 systemd-logind[786]: Removed session 14.
Feb  1 09:48:36 np0005604375 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  1 09:48:39 np0005604375 systemd-logind[786]: New session 15 of user zuul.
Feb  1 09:48:39 np0005604375 systemd[1]: Started Session 15 of User zuul.
Feb  1 09:48:40 np0005604375 python3.9[69373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:48:41 np0005604375 python3.9[69529]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  1 09:48:42 np0005604375 python3.9[69683]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 09:48:42 np0005604375 python3.9[69836]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:43 np0005604375 python3.9[69989]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:48:44 np0005604375 python3.9[70143]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:45 np0005604375 python3.9[70298]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:48:45 np0005604375 systemd[1]: session-15.scope: Deactivated successfully.
Feb  1 09:48:45 np0005604375 systemd[1]: session-15.scope: Consumed 4.050s CPU time.
Feb  1 09:48:45 np0005604375 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Feb  1 09:48:45 np0005604375 systemd-logind[786]: Removed session 15.
Feb  1 09:48:50 np0005604375 systemd-logind[786]: New session 16 of user zuul.
Feb  1 09:48:50 np0005604375 systemd[1]: Started Session 16 of User zuul.
Feb  1 09:48:51 np0005604375 python3.9[70476]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:48:52 np0005604375 python3.9[70632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:48:53 np0005604375 python3.9[70716]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  1 09:48:55 np0005604375 python3.9[70867]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:48:56 np0005604375 python3.9[71018]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  1 09:48:57 np0005604375 python3.9[71168]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:48:57 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 09:48:57 np0005604375 python3.9[71319]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:48:58 np0005604375 systemd[1]: session-16.scope: Deactivated successfully.
Feb  1 09:48:58 np0005604375 systemd[1]: session-16.scope: Consumed 5.370s CPU time.
Feb  1 09:48:58 np0005604375 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Feb  1 09:48:58 np0005604375 systemd-logind[786]: Removed session 16.
Feb  1 09:49:04 np0005604375 systemd-logind[786]: New session 17 of user zuul.
Feb  1 09:49:04 np0005604375 systemd[1]: Started Session 17 of User zuul.
Feb  1 09:49:09 np0005604375 python3[72085]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:49:11 np0005604375 python3[72180]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  1 09:49:12 np0005604375 python3[72207]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:12 np0005604375 python3[72233]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:12 np0005604375 kernel: loop: module loaded
Feb  1 09:49:12 np0005604375 kernel: loop3: detected capacity change from 0 to 41943040
Feb  1 09:49:13 np0005604375 python3[72268]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:13 np0005604375 lvm[72271]: PV /dev/loop3 not used.
Feb  1 09:49:13 np0005604375 lvm[72280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:49:13 np0005604375 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb  1 09:49:13 np0005604375 lvm[72282]:  1 logical volume(s) in volume group "ceph_vg0" now active
Feb  1 09:49:13 np0005604375 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb  1 09:49:13 np0005604375 python3[72360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:49:14 np0005604375 python3[72433]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957353.4675407-36131-147630417988920/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:49:14 np0005604375 python3[72483]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:49:14 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:14 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:14 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:14 np0005604375 systemd[1]: Starting Ceph OSD losetup...
Feb  1 09:49:14 np0005604375 bash[72523]: /dev/loop3: [64513]:4329562 (/var/lib/ceph-osd-0.img)
Feb  1 09:49:14 np0005604375 systemd[1]: Finished Ceph OSD losetup.
Feb  1 09:49:14 np0005604375 lvm[72524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:49:14 np0005604375 lvm[72524]: VG ceph_vg0 finished
Feb  1 09:49:15 np0005604375 python3[72550]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  1 09:49:16 np0005604375 python3[72577]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:16 np0005604375 python3[72603]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:16 np0005604375 kernel: loop4: detected capacity change from 0 to 41943040
Feb  1 09:49:17 np0005604375 python3[72635]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:17 np0005604375 lvm[72638]: PV /dev/loop4 not used.
Feb  1 09:49:17 np0005604375 lvm[72648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:49:17 np0005604375 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb  1 09:49:17 np0005604375 lvm[72650]:  1 logical volume(s) in volume group "ceph_vg1" now active
Feb  1 09:49:17 np0005604375 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb  1 09:49:17 np0005604375 python3[72728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:49:18 np0005604375 python3[72801]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957357.6285377-36158-254530882717440/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:49:18 np0005604375 python3[72851]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:49:18 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:18 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:18 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:18 np0005604375 systemd[1]: Starting Ceph OSD losetup...
Feb  1 09:49:18 np0005604375 bash[72891]: /dev/loop4: [64513]:4356750 (/var/lib/ceph-osd-1.img)
Feb  1 09:49:18 np0005604375 systemd[1]: Finished Ceph OSD losetup.
Feb  1 09:49:18 np0005604375 lvm[72892]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:49:18 np0005604375 lvm[72892]: VG ceph_vg1 finished
Feb  1 09:49:19 np0005604375 python3[72918]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  1 09:49:20 np0005604375 python3[72945]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:21 np0005604375 python3[72971]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:21 np0005604375 kernel: loop5: detected capacity change from 0 to 41943040
Feb  1 09:49:21 np0005604375 python3[73003]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:21 np0005604375 lvm[73006]: PV /dev/loop5 not used.
Feb  1 09:49:21 np0005604375 lvm[73016]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:49:21 np0005604375 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Feb  1 09:49:21 np0005604375 lvm[73018]:  1 logical volume(s) in volume group "ceph_vg2" now active
Feb  1 09:49:21 np0005604375 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Feb  1 09:49:21 np0005604375 python3[73096]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:49:22 np0005604375 python3[73169]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957361.7326322-36185-123441704867521/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:49:22 np0005604375 python3[73219]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:49:22 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:22 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:22 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:23 np0005604375 systemd[1]: Starting Ceph OSD losetup...
Feb  1 09:49:23 np0005604375 bash[73259]: /dev/loop5: [64513]:4356753 (/var/lib/ceph-osd-2.img)
Feb  1 09:49:23 np0005604375 systemd[1]: Finished Ceph OSD losetup.
Feb  1 09:49:23 np0005604375 lvm[73260]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:49:23 np0005604375 lvm[73260]: VG ceph_vg2 finished
Feb  1 09:49:24 np0005604375 python3[73284]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:49:26 np0005604375 python3[73377]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  1 09:49:29 np0005604375 python3[73435]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  1 09:49:31 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 09:49:31 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 09:49:31 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 09:49:31 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 09:49:31 np0005604375 systemd[1]: run-r7d3447a9278746b3b4366efa4a157989.service: Deactivated successfully.
Feb  1 09:49:32 np0005604375 python3[73554]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:32 np0005604375 python3[73582]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:32 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:33 np0005604375 python3[73622]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:49:33 np0005604375 python3[73648]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:49:34 np0005604375 python3[73726]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:49:34 np0005604375 python3[73799]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957373.9692478-36334-30023732298472/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:49:35 np0005604375 python3[73901]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:49:35 np0005604375 python3[73974]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957375.049498-36352-64714602929366/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:49:36 np0005604375 python3[74024]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:36 np0005604375 chronyd[58562]: Selected source 198.50.174.203 (pool.ntp.org)
Feb  1 09:49:36 np0005604375 python3[74052]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:36 np0005604375 python3[74080]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:36 np0005604375 python3[74106]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:49:37 np0005604375 python3[74132]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:49:37 np0005604375 systemd-logind[786]: New session 18 of user ceph-admin.
Feb  1 09:49:37 np0005604375 systemd[1]: Created slice User Slice of UID 42477.
Feb  1 09:49:37 np0005604375 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  1 09:49:37 np0005604375 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  1 09:49:37 np0005604375 systemd[1]: Starting User Manager for UID 42477...
Feb  1 09:49:37 np0005604375 systemd[74140]: Queued start job for default target Main User Target.
Feb  1 09:49:37 np0005604375 systemd[74140]: Created slice User Application Slice.
Feb  1 09:49:37 np0005604375 systemd[74140]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  1 09:49:37 np0005604375 systemd[74140]: Started Daily Cleanup of User's Temporary Directories.
Feb  1 09:49:37 np0005604375 systemd[74140]: Reached target Paths.
Feb  1 09:49:37 np0005604375 systemd[74140]: Reached target Timers.
Feb  1 09:49:37 np0005604375 systemd[74140]: Starting D-Bus User Message Bus Socket...
Feb  1 09:49:37 np0005604375 systemd[74140]: Starting Create User's Volatile Files and Directories...
Feb  1 09:49:37 np0005604375 systemd[74140]: Listening on D-Bus User Message Bus Socket.
Feb  1 09:49:37 np0005604375 systemd[74140]: Reached target Sockets.
Feb  1 09:49:37 np0005604375 systemd[74140]: Finished Create User's Volatile Files and Directories.
Feb  1 09:49:37 np0005604375 systemd[74140]: Reached target Basic System.
Feb  1 09:49:37 np0005604375 systemd[74140]: Reached target Main User Target.
Feb  1 09:49:37 np0005604375 systemd[74140]: Startup finished in 124ms.
Feb  1 09:49:37 np0005604375 systemd[1]: Started User Manager for UID 42477.
Feb  1 09:49:37 np0005604375 systemd[1]: Started Session 18 of User ceph-admin.
Feb  1 09:49:37 np0005604375 systemd[1]: session-18.scope: Deactivated successfully.
Feb  1 09:49:37 np0005604375 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Feb  1 09:49:37 np0005604375 systemd-logind[786]: Removed session 18.
Feb  1 09:49:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:38 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:39 np0005604375 systemd[1]: var-lib-containers-storage-overlay-compat4250495151-merged.mount: Deactivated successfully.
Feb  1 09:49:40 np0005604375 systemd[1]: var-lib-containers-storage-overlay-compat4250495151-lower\x2dmapped.mount: Deactivated successfully.
Feb  1 09:49:48 np0005604375 systemd[1]: Stopping User Manager for UID 42477...
Feb  1 09:49:48 np0005604375 systemd[74140]: Activating special unit Exit the Session...
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped target Main User Target.
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped target Basic System.
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped target Paths.
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped target Sockets.
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped target Timers.
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  1 09:49:48 np0005604375 systemd[74140]: Closed D-Bus User Message Bus Socket.
Feb  1 09:49:48 np0005604375 systemd[74140]: Stopped Create User's Volatile Files and Directories.
Feb  1 09:49:48 np0005604375 systemd[74140]: Removed slice User Application Slice.
Feb  1 09:49:48 np0005604375 systemd[74140]: Reached target Shutdown.
Feb  1 09:49:48 np0005604375 systemd[74140]: Finished Exit the Session.
Feb  1 09:49:48 np0005604375 systemd[74140]: Reached target Exit the Session.
Feb  1 09:49:48 np0005604375 systemd[1]: user@42477.service: Deactivated successfully.
Feb  1 09:49:48 np0005604375 systemd[1]: Stopped User Manager for UID 42477.
Feb  1 09:49:48 np0005604375 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb  1 09:49:48 np0005604375 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb  1 09:49:48 np0005604375 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb  1 09:49:48 np0005604375 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb  1 09:49:48 np0005604375 systemd[1]: Removed slice User Slice of UID 42477.
Feb  1 09:49:54 np0005604375 podman[74233]: 2026-02-01 14:49:54.931854271 +0000 UTC m=+16.758031538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:54 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:54 np0005604375 podman[74292]: 2026-02-01 14:49:54.978008006 +0000 UTC m=+0.031479711 container create 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:49:55 np0005604375 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb  1 09:49:55 np0005604375 systemd[1]: Started libpod-conmon-8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9.scope.
Feb  1 09:49:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:55 np0005604375 podman[74292]: 2026-02-01 14:49:54.964881435 +0000 UTC m=+0.018353150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:55 np0005604375 podman[74292]: 2026-02-01 14:49:55.065775439 +0000 UTC m=+0.119247184 container init 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:49:55 np0005604375 podman[74292]: 2026-02-01 14:49:55.070976337 +0000 UTC m=+0.124448092 container start 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:49:55 np0005604375 podman[74292]: 2026-02-01 14:49:55.075221677 +0000 UTC m=+0.128693462 container attach 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:49:55 np0005604375 infallible_einstein[74308]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74313]: 2026-02-01 14:49:55.220544759 +0000 UTC m=+0.024803043 container died 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:49:55 np0005604375 systemd[1]: var-lib-containers-storage-overlay-15c1e7f9befd4c74060c625f7a99436fe58d38b367e3c4e55aca45ece74faa65-merged.mount: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74313]: 2026-02-01 14:49:55.25698634 +0000 UTC m=+0.061244614 container remove 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-conmon-8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74328]: 2026-02-01 14:49:55.323254155 +0000 UTC m=+0.045302993 container create f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 09:49:55 np0005604375 systemd[1]: Started libpod-conmon-f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13.scope.
Feb  1 09:49:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:55 np0005604375 podman[74328]: 2026-02-01 14:49:55.385491886 +0000 UTC m=+0.107540744 container init f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:49:55 np0005604375 podman[74328]: 2026-02-01 14:49:55.392390412 +0000 UTC m=+0.114439250 container start f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  1 09:49:55 np0005604375 jolly_northcutt[74344]: 167 167
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74328]: 2026-02-01 14:49:55.396703074 +0000 UTC m=+0.118751962 container attach f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb  1 09:49:55 np0005604375 podman[74328]: 2026-02-01 14:49:55.397158657 +0000 UTC m=+0.119207505 container died f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 09:49:55 np0005604375 podman[74328]: 2026-02-01 14:49:55.304554426 +0000 UTC m=+0.026603264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:55 np0005604375 podman[74328]: 2026-02-01 14:49:55.431020855 +0000 UTC m=+0.153069663 container remove f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-conmon-f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74361]: 2026-02-01 14:49:55.480828194 +0000 UTC m=+0.036430632 container create 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  1 09:49:55 np0005604375 systemd[1]: Started libpod-conmon-9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc.scope.
Feb  1 09:49:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:55 np0005604375 podman[74361]: 2026-02-01 14:49:55.545923996 +0000 UTC m=+0.101526454 container init 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:55 np0005604375 podman[74361]: 2026-02-01 14:49:55.551148414 +0000 UTC m=+0.106750862 container start 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  1 09:49:55 np0005604375 podman[74361]: 2026-02-01 14:49:55.555153797 +0000 UTC m=+0.110756285 container attach 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:49:55 np0005604375 podman[74361]: 2026-02-01 14:49:55.461442516 +0000 UTC m=+0.017044994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:55 np0005604375 competent_newton[74377]: AQATaH9piNQ8IhAAOrkahw461D5iBEXuZK7gdA==
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74361]: 2026-02-01 14:49:55.578115457 +0000 UTC m=+0.133717905 container died 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 09:49:55 np0005604375 podman[74361]: 2026-02-01 14:49:55.609371202 +0000 UTC m=+0.164973660 container remove 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-conmon-9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74397]: 2026-02-01 14:49:55.668511805 +0000 UTC m=+0.045004854 container create bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:49:55 np0005604375 systemd[1]: Started libpod-conmon-bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1.scope.
Feb  1 09:49:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:55 np0005604375 podman[74397]: 2026-02-01 14:49:55.728131852 +0000 UTC m=+0.104624941 container init bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 09:49:55 np0005604375 podman[74397]: 2026-02-01 14:49:55.732064073 +0000 UTC m=+0.108557152 container start bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:55 np0005604375 podman[74397]: 2026-02-01 14:49:55.736823188 +0000 UTC m=+0.113316267 container attach bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 09:49:55 np0005604375 podman[74397]: 2026-02-01 14:49:55.646231125 +0000 UTC m=+0.022724224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:55 np0005604375 reverent_turing[74413]: AQATaH9pJBF8LRAABIDa+8Sbw/MmLGIqYlu/JQ==
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74397]: 2026-02-01 14:49:55.766462697 +0000 UTC m=+0.142955776 container died bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:49:55 np0005604375 podman[74397]: 2026-02-01 14:49:55.810553484 +0000 UTC m=+0.187046573 container remove bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-conmon-bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74431]: 2026-02-01 14:49:55.873558807 +0000 UTC m=+0.049511502 container create 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 09:49:55 np0005604375 systemd[1]: Started libpod-conmon-2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888.scope.
Feb  1 09:49:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:55 np0005604375 podman[74431]: 2026-02-01 14:49:55.847288594 +0000 UTC m=+0.023241349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:55 np0005604375 podman[74431]: 2026-02-01 14:49:55.943393203 +0000 UTC m=+0.119345908 container init 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:55 np0005604375 podman[74431]: 2026-02-01 14:49:55.947248452 +0000 UTC m=+0.123201117 container start 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:49:55 np0005604375 podman[74431]: 2026-02-01 14:49:55.95104387 +0000 UTC m=+0.126996575 container attach 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:49:55 np0005604375 clever_kare[74447]: AQATaH9pcU0/OhAA5/qznE0OF88dqcubdDoRWg==
Feb  1 09:49:55 np0005604375 systemd[1]: libpod-2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888.scope: Deactivated successfully.
Feb  1 09:49:55 np0005604375 podman[74431]: 2026-02-01 14:49:55.981253355 +0000 UTC m=+0.157206010 container died 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:55 np0005604375 systemd[1]: var-lib-containers-storage-overlay-521ca0820afe508ddc79a29da2ac46cc6115005ec01cfb51a19d8e5f050c9bf7-merged.mount: Deactivated successfully.
Feb  1 09:49:56 np0005604375 podman[74431]: 2026-02-01 14:49:56.008938448 +0000 UTC m=+0.184891103 container remove 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:49:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:56 np0005604375 systemd[1]: libpod-conmon-2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888.scope: Deactivated successfully.
Feb  1 09:49:56 np0005604375 podman[74466]: 2026-02-01 14:49:56.065482918 +0000 UTC m=+0.040138017 container create 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:49:56 np0005604375 systemd[1]: Started libpod-conmon-4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086.scope.
Feb  1 09:49:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcfbc1e71b70e2e4a3bac4223cca4c065a30e21007e28e13933b952f9d5d6ba4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:56 np0005604375 podman[74466]: 2026-02-01 14:49:56.121995647 +0000 UTC m=+0.096650846 container init 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:56 np0005604375 podman[74466]: 2026-02-01 14:49:56.129946712 +0000 UTC m=+0.104601841 container start 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:56 np0005604375 podman[74466]: 2026-02-01 14:49:56.134178222 +0000 UTC m=+0.108833401 container attach 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  1 09:49:56 np0005604375 podman[74466]: 2026-02-01 14:49:56.048663432 +0000 UTC m=+0.023318561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:56 np0005604375 clever_tesla[74482]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb  1 09:49:56 np0005604375 clever_tesla[74482]: setting min_mon_release = tentacle
Feb  1 09:49:56 np0005604375 clever_tesla[74482]: /usr/bin/monmaptool: set fsid to 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:49:56 np0005604375 clever_tesla[74482]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb  1 09:49:56 np0005604375 systemd[1]: libpod-4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086.scope: Deactivated successfully.
Feb  1 09:49:56 np0005604375 podman[74466]: 2026-02-01 14:49:56.177717944 +0000 UTC m=+0.152373063 container died 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:49:56 np0005604375 podman[74466]: 2026-02-01 14:49:56.213080815 +0000 UTC m=+0.187735934 container remove 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:49:56 np0005604375 systemd[1]: libpod-conmon-4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086.scope: Deactivated successfully.
Feb  1 09:49:56 np0005604375 podman[74502]: 2026-02-01 14:49:56.302704611 +0000 UTC m=+0.061054569 container create 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:49:56 np0005604375 systemd[1]: Started libpod-conmon-9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8.scope.
Feb  1 09:49:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:56 np0005604375 podman[74502]: 2026-02-01 14:49:56.277496848 +0000 UTC m=+0.035846856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:56 np0005604375 podman[74502]: 2026-02-01 14:49:56.378392433 +0000 UTC m=+0.136742431 container init 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:56 np0005604375 podman[74502]: 2026-02-01 14:49:56.392936754 +0000 UTC m=+0.151286712 container start 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:49:56 np0005604375 podman[74502]: 2026-02-01 14:49:56.39703289 +0000 UTC m=+0.155382918 container attach 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 09:49:56 np0005604375 systemd[1]: libpod-9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8.scope: Deactivated successfully.
Feb  1 09:49:56 np0005604375 podman[74502]: 2026-02-01 14:49:56.499259273 +0000 UTC m=+0.257609201 container died 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 09:49:56 np0005604375 podman[74502]: 2026-02-01 14:49:56.538263857 +0000 UTC m=+0.296613795 container remove 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 09:49:56 np0005604375 systemd[1]: libpod-conmon-9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8.scope: Deactivated successfully.
Feb  1 09:49:56 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:56 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:56 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:56 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:56 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:56 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:57 np0005604375 systemd[1]: Reached target All Ceph clusters and services.
Feb  1 09:49:57 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:57 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:57 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:57 np0005604375 systemd[1]: Reached target Ceph cluster 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:49:57 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:57 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:57 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:57 np0005604375 systemd[1]: Reloading.
Feb  1 09:49:57 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:49:57 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:49:57 np0005604375 systemd[1]: Created slice Slice /system/ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:49:57 np0005604375 systemd[1]: Reached target System Time Set.
Feb  1 09:49:57 np0005604375 systemd[1]: Reached target System Time Synchronized.
Feb  1 09:49:57 np0005604375 systemd[1]: Starting Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:49:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:49:57 np0005604375 podman[74796]: 2026-02-01 14:49:57.940936158 +0000 UTC m=+0.046715743 container create 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:49:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:57 np0005604375 podman[74796]: 2026-02-01 14:49:57.998375873 +0000 UTC m=+0.104155468 container init 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 09:49:58 np0005604375 podman[74796]: 2026-02-01 14:49:58.005769343 +0000 UTC m=+0.111548908 container start 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:58 np0005604375 bash[74796]: 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4
Feb  1 09:49:58 np0005604375 podman[74796]: 2026-02-01 14:49:57.917475814 +0000 UTC m=+0.023255429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:58 np0005604375 systemd[1]: Started Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: pidfile_write: ignore empty --pid-file
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: load: jerasure load: lrc 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Git sha 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: DB SUMMARY
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: DB Session ID:  K5YBZO4V0HPEJZNFFZIL
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                                     Options.env: 0x56348156d440
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                                Options.info_log: 0x5634833e73e0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                                 Options.wal_dir: 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                    Options.write_buffer_manager: 0x563483366140
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                               Options.row_cache: None
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                              Options.wal_filter: None
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.wal_compression: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.max_background_jobs: 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.max_total_wal_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:       Options.compaction_readahead_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Compression algorithms supported:
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kZSTD supported: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kXpressCompression supported: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kBZip2Compression supported: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kLZ4Compression supported: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kZlibCompression supported: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: #011kSnappyCompression supported: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:           Options.merge_operator: 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:        Options.compaction_filter: None
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563483372600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634833578d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:        Options.write_buffer_size: 33554432
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:  Options.max_write_buffer_number: 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:          Options.compression: NoCompression
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.num_levels: 7
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 22ff331c-3ab9-4629-8bb9-0845546f6646
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957398061993, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957398068450, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "K5YBZO4V0HPEJZNFFZIL", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957398068570, "job": 1, "event": "recovery_finished"}
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb  1 09:49:58 np0005604375 podman[74816]: 2026-02-01 14:49:58.078237063 +0000 UTC m=+0.043362098 container create 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563483384e00
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: DB pointer 0x5634834d0000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634833578d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@-1(???) e0 preinit fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(probing) e0 win_standalone_election
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : last_changed 2026-02-01T14:49:56.174590+0000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : created 2026-02-01T14:49:56.174590+0000
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-02-01T14:49:56.448252Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Feb  1 09:49:58 np0005604375 systemd[1]: Started libpod-conmon-39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94.scope.
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).mds e1 new map
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-02-01T14:49:58:117399+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : fsmap 
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mkfs 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  1 09:49:58 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  1 09:49:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  1 09:49:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:58 np0005604375 podman[74816]: 2026-02-01 14:49:58.054225604 +0000 UTC m=+0.019350659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:58 np0005604375 podman[74816]: 2026-02-01 14:49:58.167549651 +0000 UTC m=+0.132674696 container init 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:49:58 np0005604375 podman[74816]: 2026-02-01 14:49:58.175464734 +0000 UTC m=+0.140589809 container start 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:49:58 np0005604375 podman[74816]: 2026-02-01 14:49:58.179038616 +0000 UTC m=+0.144163701 container attach 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867840613' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:  cluster:
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    id:     2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    health: HEALTH_OK
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]: 
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:  services:
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    mon: 1 daemons, quorum compute-0 (age 0.2515s) [leader: compute-0]
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    mgr: no daemons active
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    osd: 0 osds: 0 up, 0 in
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]: 
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:  data:
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    pools:   0 pools, 0 pgs
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    objects: 0 objects, 0 B
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    usage:   0 B used, 0 B / 0 B avail
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]:    pgs:     
Feb  1 09:49:58 np0005604375 hopeful_shannon[74870]: 
Feb  1 09:49:58 np0005604375 systemd[1]: libpod-39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94.scope: Deactivated successfully.
Feb  1 09:49:58 np0005604375 podman[74897]: 2026-02-01 14:49:58.433606899 +0000 UTC m=+0.037909584 container died 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:49:58 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f-merged.mount: Deactivated successfully.
Feb  1 09:49:58 np0005604375 podman[74897]: 2026-02-01 14:49:58.476875114 +0000 UTC m=+0.081177789 container remove 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:49:58 np0005604375 systemd[1]: libpod-conmon-39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94.scope: Deactivated successfully.
Feb  1 09:49:58 np0005604375 podman[74912]: 2026-02-01 14:49:58.555562549 +0000 UTC m=+0.049552022 container create 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 09:49:58 np0005604375 systemd[1]: Started libpod-conmon-6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d.scope.
Feb  1 09:49:58 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:58 np0005604375 podman[74912]: 2026-02-01 14:49:58.536757288 +0000 UTC m=+0.030746761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:58 np0005604375 podman[74912]: 2026-02-01 14:49:58.658927894 +0000 UTC m=+0.152917447 container init 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 09:49:58 np0005604375 podman[74912]: 2026-02-01 14:49:58.664941844 +0000 UTC m=+0.158931287 container start 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb  1 09:49:58 np0005604375 podman[74912]: 2026-02-01 14:49:58.669495113 +0000 UTC m=+0.163484566 container attach 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  1 09:49:58 np0005604375 ceph-mon[74815]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  1 09:49:58 np0005604375 keen_robinson[74929]: 
Feb  1 09:49:58 np0005604375 keen_robinson[74929]: [global]
Feb  1 09:49:58 np0005604375 keen_robinson[74929]: #011fsid = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:49:58 np0005604375 keen_robinson[74929]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb  1 09:49:58 np0005604375 keen_robinson[74929]: #011osd_crush_chooseleaf_type = 0
Feb  1 09:49:58 np0005604375 systemd[1]: libpod-6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d.scope: Deactivated successfully.
Feb  1 09:49:58 np0005604375 podman[74955]: 2026-02-01 14:49:58.897970448 +0000 UTC m=+0.025562174 container died 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:49:59 np0005604375 systemd[1]: var-lib-containers-storage-overlay-1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd-merged.mount: Deactivated successfully.
Feb  1 09:49:59 np0005604375 podman[74955]: 2026-02-01 14:49:59.042549229 +0000 UTC m=+0.170140945 container remove 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:49:59 np0005604375 systemd[1]: libpod-conmon-6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d.scope: Deactivated successfully.
Feb  1 09:49:59 np0005604375 podman[74972]: 2026-02-01 14:49:59.09062892 +0000 UTC m=+0.031214725 container create 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:49:59 np0005604375 systemd[1]: Started libpod-conmon-101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d.scope.
Feb  1 09:49:59 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:49:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  1 09:49:59 np0005604375 podman[74972]: 2026-02-01 14:49:59.168509684 +0000 UTC m=+0.109095469 container init 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:49:59 np0005604375 podman[74972]: 2026-02-01 14:49:59.075551053 +0000 UTC m=+0.016136858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:49:59 np0005604375 podman[74972]: 2026-02-01 14:49:59.175212453 +0000 UTC m=+0.115798228 container start 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 09:49:59 np0005604375 podman[74972]: 2026-02-01 14:49:59.178462425 +0000 UTC m=+0.119048250 container attach 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183917051' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:49:59 np0005604375 systemd[1]: libpod-101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d.scope: Deactivated successfully.
Feb  1 09:49:59 np0005604375 podman[74972]: 2026-02-01 14:49:59.394657683 +0000 UTC m=+0.335243488 container died 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  1 09:49:59 np0005604375 systemd[1]: var-lib-containers-storage-overlay-eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b-merged.mount: Deactivated successfully.
Feb  1 09:49:59 np0005604375 podman[74972]: 2026-02-01 14:49:59.43765069 +0000 UTC m=+0.378236475 container remove 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 09:49:59 np0005604375 systemd[1]: libpod-conmon-101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d.scope: Deactivated successfully.
Feb  1 09:49:59 np0005604375 systemd[1]: Stopping Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: mon.compute-0@0(leader) e1 shutdown
Feb  1 09:49:59 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0[74811]: 2026-02-01T14:49:59.630+0000 7f4e50781640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  1 09:49:59 np0005604375 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  1 09:49:59 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0[74811]: 2026-02-01T14:49:59.630+0000 7f4e50781640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  1 09:49:59 np0005604375 podman[75055]: 2026-02-01 14:49:59.885698058 +0000 UTC m=+0.285394197 container died 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:49:59 np0005604375 systemd[1]: var-lib-containers-storage-overlay-70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe-merged.mount: Deactivated successfully.
Feb  1 09:49:59 np0005604375 podman[75055]: 2026-02-01 14:49:59.922645073 +0000 UTC m=+0.322341242 container remove 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:49:59 np0005604375 bash[75055]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0
Feb  1 09:49:59 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:50:00 np0005604375 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mon.compute-0.service: Deactivated successfully.
Feb  1 09:50:00 np0005604375 systemd[1]: Stopped Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:50:00 np0005604375 systemd[1]: Starting Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:50:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:50:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  1 09:50:00 np0005604375 podman[75159]: 2026-02-01 14:50:00.23563625 +0000 UTC m=+0.035272269 container create 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 podman[75159]: 2026-02-01 14:50:00.282023543 +0000 UTC m=+0.081659552 container init 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:00 np0005604375 podman[75159]: 2026-02-01 14:50:00.294418403 +0000 UTC m=+0.094054422 container start 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:00 np0005604375 bash[75159]: 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41
Feb  1 09:50:00 np0005604375 podman[75159]: 2026-02-01 14:50:00.218412923 +0000 UTC m=+0.018048942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:00 np0005604375 systemd[1]: Started Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: pidfile_write: ignore empty --pid-file
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: load: jerasure load: lrc 
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Git sha 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: DB SUMMARY
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: DB Session ID:  9H8HU9QM155BYJ6W9TB0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                                     Options.env: 0x5635c4a03440
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                                Options.info_log: 0x5635c5d0fe80
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                                 Options.wal_dir: 
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                    Options.write_buffer_manager: 0x5635c5d5a140
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                               Options.row_cache: None
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                              Options.wal_filter: None
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.wal_compression: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.max_background_jobs: 2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.max_total_wal_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:       Options.compaction_readahead_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Compression algorithms supported:
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kZSTD supported: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kXpressCompression supported: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kBZip2Compression supported: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kLZ4Compression supported: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kZlibCompression supported: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: #011kSnappyCompression supported: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:           Options.merge_operator: 
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:        Options.compaction_filter: None
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5635c5d66a00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5635c5d4b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:        Options.write_buffer_size: 33554432
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:  Options.max_write_buffer_number: 2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:          Options.compression: NoCompression
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.num_levels: 7
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 22ff331c-3ab9-4629-8bb9-0845546f6646
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957400356976, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957400362520, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957400, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957400362631, "job": 1, "event": "recovery_finished"}
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5635c5d78e00
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: DB pointer 0x5635c5ec2000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???) e1 preinit fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???).mds e1 new map
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-02-01T14:49:58:117399+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb  1 09:50:00 np0005604375 podman[75180]: 2026-02-01 14:50:00.382349942 +0000 UTC m=+0.052349723 container create 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : last_changed 2026-02-01T14:49:56.174590+0000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : created 2026-02-01T14:49:56.174590+0000
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap 
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  1 09:50:00 np0005604375 systemd[1]: Started libpod-conmon-2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be.scope.
Feb  1 09:50:00 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12c00ea073014746946adbf38bbffc72e7794034ea9f8084e2201b3b7dde37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12c00ea073014746946adbf38bbffc72e7794034ea9f8084e2201b3b7dde37f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12c00ea073014746946adbf38bbffc72e7794034ea9f8084e2201b3b7dde37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  1 09:50:00 np0005604375 podman[75180]: 2026-02-01 14:50:00.455688117 +0000 UTC m=+0.125687978 container init 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 09:50:00 np0005604375 podman[75180]: 2026-02-01 14:50:00.364541458 +0000 UTC m=+0.034541269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:00 np0005604375 podman[75180]: 2026-02-01 14:50:00.461023758 +0000 UTC m=+0.131023539 container start 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  1 09:50:00 np0005604375 podman[75180]: 2026-02-01 14:50:00.469596631 +0000 UTC m=+0.139596432 container attach 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb  1 09:50:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb  1 09:50:00 np0005604375 systemd[1]: libpod-2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be.scope: Deactivated successfully.
Feb  1 09:50:00 np0005604375 podman[75180]: 2026-02-01 14:50:00.686966921 +0000 UTC m=+0.356966722 container died 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 09:50:00 np0005604375 podman[75180]: 2026-02-01 14:50:00.728452825 +0000 UTC m=+0.398452606 container remove 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:50:00 np0005604375 systemd[1]: libpod-conmon-2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be.scope: Deactivated successfully.
Feb  1 09:50:00 np0005604375 podman[75270]: 2026-02-01 14:50:00.803745666 +0000 UTC m=+0.051879829 container create 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:00 np0005604375 systemd[1]: Started libpod-conmon-1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36.scope.
Feb  1 09:50:00 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:00 np0005604375 podman[75270]: 2026-02-01 14:50:00.785715216 +0000 UTC m=+0.033849379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:00 np0005604375 podman[75270]: 2026-02-01 14:50:00.901701258 +0000 UTC m=+0.149835461 container init 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:00 np0005604375 podman[75270]: 2026-02-01 14:50:00.909368215 +0000 UTC m=+0.157502368 container start 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:50:00 np0005604375 podman[75270]: 2026-02-01 14:50:00.920209162 +0000 UTC m=+0.168343385 container attach 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 09:50:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb  1 09:50:01 np0005604375 systemd[1]: libpod-1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36.scope: Deactivated successfully.
Feb  1 09:50:01 np0005604375 podman[75270]: 2026-02-01 14:50:01.147440621 +0000 UTC m=+0.395574774 container died 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:01 np0005604375 systemd[1]: var-lib-containers-storage-overlay-82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1-merged.mount: Deactivated successfully.
Feb  1 09:50:01 np0005604375 podman[75270]: 2026-02-01 14:50:01.185267642 +0000 UTC m=+0.433401765 container remove 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:01 np0005604375 systemd[1]: libpod-conmon-1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36.scope: Deactivated successfully.
Feb  1 09:50:01 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:01 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:01 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:01 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:01 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:01 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:01 np0005604375 systemd[1]: Starting Ceph mgr.compute-0.viosrg for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:50:02 np0005604375 podman[75450]: 2026-02-01 14:50:02.041445179 +0000 UTC m=+0.048739970 container create c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/var/lib/ceph/mgr/ceph-compute-0.viosrg supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:02 np0005604375 podman[75450]: 2026-02-01 14:50:02.103256538 +0000 UTC m=+0.110551379 container init c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 09:50:02 np0005604375 podman[75450]: 2026-02-01 14:50:02.016283877 +0000 UTC m=+0.023578718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:02 np0005604375 podman[75450]: 2026-02-01 14:50:02.111910503 +0000 UTC m=+0.119205304 container start c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:02 np0005604375 bash[75450]: c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0
Feb  1 09:50:02 np0005604375 systemd[1]: Started Ceph mgr.compute-0.viosrg for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:50:02 np0005604375 ceph-mgr[75469]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:50:02 np0005604375 ceph-mgr[75469]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  1 09:50:02 np0005604375 ceph-mgr[75469]: pidfile_write: ignore empty --pid-file
Feb  1 09:50:02 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'alerts'
Feb  1 09:50:02 np0005604375 podman[75470]: 2026-02-01 14:50:02.214406882 +0000 UTC m=+0.061642865 container create eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:02 np0005604375 systemd[1]: Started libpod-conmon-eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961.scope.
Feb  1 09:50:02 np0005604375 podman[75470]: 2026-02-01 14:50:02.189451266 +0000 UTC m=+0.036687329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:02 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'balancer'
Feb  1 09:50:02 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:02 np0005604375 podman[75470]: 2026-02-01 14:50:02.318602811 +0000 UTC m=+0.165838864 container init eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 09:50:02 np0005604375 podman[75470]: 2026-02-01 14:50:02.325063104 +0000 UTC m=+0.172299097 container start eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:02 np0005604375 podman[75470]: 2026-02-01 14:50:02.329107738 +0000 UTC m=+0.176343821 container attach eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:02 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'cephadm'
Feb  1 09:50:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  1 09:50:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4243542664' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  1 09:50:02 np0005604375 epic_chaum[75507]: 
Feb  1 09:50:02 np0005604375 epic_chaum[75507]: {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "health": {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "status": "HEALTH_OK",
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "checks": {},
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "mutes": []
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    },
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "election_epoch": 5,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "quorum": [
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        0
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    ],
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "quorum_names": [
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "compute-0"
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    ],
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "quorum_age": 2,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "monmap": {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "epoch": 1,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "min_mon_release_name": "tentacle",
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_mons": 1
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    },
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "osdmap": {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "epoch": 1,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_osds": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_up_osds": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "osd_up_since": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_in_osds": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "osd_in_since": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_remapped_pgs": 0
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    },
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "pgmap": {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "pgs_by_state": [],
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_pgs": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_pools": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_objects": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "data_bytes": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "bytes_used": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "bytes_avail": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "bytes_total": 0
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    },
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "fsmap": {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "epoch": 1,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "btime": "2026-02-01T14:49:58:117399+0000",
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "by_rank": [],
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "up:standby": 0
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    },
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "mgrmap": {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "available": false,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "num_standbys": 0,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "modules": [
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:            "iostat",
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:            "nfs"
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        ],
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "services": {}
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    },
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "servicemap": {
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "epoch": 1,
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "modified": "2026-02-01T14:49:58.120892+0000",
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:        "services": {}
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    },
Feb  1 09:50:02 np0005604375 epic_chaum[75507]:    "progress_events": {}
Feb  1 09:50:02 np0005604375 epic_chaum[75507]: }
Feb  1 09:50:02 np0005604375 systemd[1]: libpod-eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961.scope: Deactivated successfully.
Feb  1 09:50:02 np0005604375 podman[75470]: 2026-02-01 14:50:02.573608317 +0000 UTC m=+0.420844340 container died eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:02 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6-merged.mount: Deactivated successfully.
Feb  1 09:50:02 np0005604375 podman[75470]: 2026-02-01 14:50:02.618620661 +0000 UTC m=+0.465856644 container remove eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 09:50:02 np0005604375 systemd[1]: libpod-conmon-eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961.scope: Deactivated successfully.
Feb  1 09:50:02 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'crash'
Feb  1 09:50:03 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'dashboard'
Feb  1 09:50:03 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'devicehealth'
Feb  1 09:50:03 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'diskprediction_local'
Feb  1 09:50:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  1 09:50:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  1 09:50:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]:  from numpy import show_config as show_numpy_config
Feb  1 09:50:03 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'influx'
Feb  1 09:50:03 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'insights'
Feb  1 09:50:04 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'iostat'
Feb  1 09:50:04 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'k8sevents'
Feb  1 09:50:04 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'localpool'
Feb  1 09:50:04 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'mds_autoscaler'
Feb  1 09:50:04 np0005604375 podman[75557]: 2026-02-01 14:50:04.711585815 +0000 UTC m=+0.068618462 container create 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:04 np0005604375 systemd[1]: Started libpod-conmon-5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c.scope.
Feb  1 09:50:04 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'mirroring'
Feb  1 09:50:04 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:04 np0005604375 podman[75557]: 2026-02-01 14:50:04.684358165 +0000 UTC m=+0.041390852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:04 np0005604375 podman[75557]: 2026-02-01 14:50:04.790122718 +0000 UTC m=+0.147155435 container init 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:04 np0005604375 podman[75557]: 2026-02-01 14:50:04.793589266 +0000 UTC m=+0.150621913 container start 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:04 np0005604375 podman[75557]: 2026-02-01 14:50:04.797556328 +0000 UTC m=+0.154588995 container attach 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:50:04 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'nfs'
Feb  1 09:50:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  1 09:50:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097144664' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]: 
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]: {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "health": {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "status": "HEALTH_OK",
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "checks": {},
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "mutes": []
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    },
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "election_epoch": 5,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "quorum": [
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        0
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    ],
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "quorum_names": [
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "compute-0"
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    ],
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "quorum_age": 4,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "monmap": {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "epoch": 1,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "min_mon_release_name": "tentacle",
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_mons": 1
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    },
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "osdmap": {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "epoch": 1,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_osds": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_up_osds": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "osd_up_since": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_in_osds": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "osd_in_since": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_remapped_pgs": 0
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    },
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "pgmap": {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "pgs_by_state": [],
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_pgs": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_pools": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_objects": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "data_bytes": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "bytes_used": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "bytes_avail": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "bytes_total": 0
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    },
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "fsmap": {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "epoch": 1,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "btime": "2026-02-01T14:49:58:117399+0000",
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "by_rank": [],
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "up:standby": 0
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    },
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "mgrmap": {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "available": false,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "num_standbys": 0,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "modules": [
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:            "iostat",
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:            "nfs"
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        ],
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "services": {}
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    },
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "servicemap": {
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "epoch": 1,
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "modified": "2026-02-01T14:49:58.120892+0000",
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:        "services": {}
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    },
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]:    "progress_events": {}
Feb  1 09:50:04 np0005604375 angry_leavitt[75573]: }
Feb  1 09:50:04 np0005604375 systemd[1]: libpod-5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c.scope: Deactivated successfully.
Feb  1 09:50:05 np0005604375 podman[75599]: 2026-02-01 14:50:05.02274794 +0000 UTC m=+0.021881240 container died 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:05 np0005604375 systemd[1]: var-lib-containers-storage-overlay-19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03-merged.mount: Deactivated successfully.
Feb  1 09:50:05 np0005604375 podman[75599]: 2026-02-01 14:50:05.057560145 +0000 UTC m=+0.056693425 container remove 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 09:50:05 np0005604375 systemd[1]: libpod-conmon-5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c.scope: Deactivated successfully.
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'orchestrator'
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'osd_perf_query'
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'osd_support'
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'pg_autoscaler'
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'progress'
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'prometheus'
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'rbd_support'
Feb  1 09:50:05 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'rgw'
Feb  1 09:50:06 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'rook'
Feb  1 09:50:06 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'selftest'
Feb  1 09:50:06 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'smb'
Feb  1 09:50:06 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'snap_schedule'
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'stats'
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'status'
Feb  1 09:50:07 np0005604375 podman[75615]: 2026-02-01 14:50:07.142874953 +0000 UTC m=+0.057695064 container create 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:07 np0005604375 systemd[1]: Started libpod-conmon-1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010.scope.
Feb  1 09:50:07 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:07 np0005604375 podman[75615]: 2026-02-01 14:50:07.117719071 +0000 UTC m=+0.032539222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'telegraf'
Feb  1 09:50:07 np0005604375 podman[75615]: 2026-02-01 14:50:07.248146592 +0000 UTC m=+0.162966753 container init 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  1 09:50:07 np0005604375 podman[75615]: 2026-02-01 14:50:07.252588907 +0000 UTC m=+0.167409018 container start 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 09:50:07 np0005604375 podman[75615]: 2026-02-01 14:50:07.256171999 +0000 UTC m=+0.170992120 container attach 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'telemetry'
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'test_orchestrator'
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260895271' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]: 
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]: {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "health": {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "status": "HEALTH_OK",
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "checks": {},
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "mutes": []
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    },
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "election_epoch": 5,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "quorum": [
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        0
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    ],
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "quorum_names": [
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "compute-0"
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    ],
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "quorum_age": 7,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "monmap": {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "epoch": 1,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "min_mon_release_name": "tentacle",
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_mons": 1
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    },
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "osdmap": {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "epoch": 1,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_osds": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_up_osds": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "osd_up_since": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_in_osds": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "osd_in_since": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_remapped_pgs": 0
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    },
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "pgmap": {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "pgs_by_state": [],
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_pgs": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_pools": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_objects": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "data_bytes": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "bytes_used": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "bytes_avail": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "bytes_total": 0
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    },
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "fsmap": {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "epoch": 1,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "btime": "2026-02-01T14:49:58:117399+0000",
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "by_rank": [],
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "up:standby": 0
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    },
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "mgrmap": {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "available": false,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "num_standbys": 0,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "modules": [
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:            "iostat",
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:            "nfs"
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        ],
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "services": {}
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    },
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "servicemap": {
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "epoch": 1,
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "modified": "2026-02-01T14:49:58.120892+0000",
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:        "services": {}
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    },
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]:    "progress_events": {}
Feb  1 09:50:07 np0005604375 nice_mcclintock[75633]: }
Feb  1 09:50:07 np0005604375 systemd[1]: libpod-1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010.scope: Deactivated successfully.
Feb  1 09:50:07 np0005604375 podman[75615]: 2026-02-01 14:50:07.456992101 +0000 UTC m=+0.371812172 container died 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 09:50:07 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1-merged.mount: Deactivated successfully.
Feb  1 09:50:07 np0005604375 podman[75615]: 2026-02-01 14:50:07.496026186 +0000 UTC m=+0.410846257 container remove 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:07 np0005604375 systemd[1]: libpod-conmon-1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010.scope: Deactivated successfully.
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'volumes'
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: ms_deliver_dispatch: unhandled message 0x56054db29860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.viosrg
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr handle_mgr_map Activating!
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr handle_mgr_map I am now activating
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.viosrg(active, starting, since 0.0122349s)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata"} : dispatch
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e1 all = 1
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata"} : dispatch
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata"} : dispatch
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} : dispatch
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: balancer
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Manager daemon compute-0.viosrg is now available
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: crash
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [balancer INFO root] Starting
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: devicehealth
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] Starting
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: iostat
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: nfs
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:50:07
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [balancer INFO root] No pools available
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: orchestrator
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: pg_autoscaler
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: progress
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [progress INFO root] Loading...
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [progress INFO root] No stored events to load
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [progress INFO root] Loaded [] historic events
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [progress INFO root] Loaded OSDMap, ready.
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] recovery thread starting
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] starting setup
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: rbd_support
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: status
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: telemetry
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] PerfHandler: starting
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TaskHandler: starting
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] setup complete
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb  1 09:50:07 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: volumes
Feb  1 09:50:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: Activating manager daemon compute-0.viosrg
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: Manager daemon compute-0.viosrg is now available
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:08 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.viosrg(active, since 1.02628s)
Feb  1 09:50:09 np0005604375 podman[75749]: 2026-02-01 14:50:09.58053212 +0000 UTC m=+0.062066847 container create 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:09 np0005604375 systemd[1]: Started libpod-conmon-59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d.scope.
Feb  1 09:50:09 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:09 np0005604375 podman[75749]: 2026-02-01 14:50:09.553779803 +0000 UTC m=+0.035314590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:09 np0005604375 podman[75749]: 2026-02-01 14:50:09.667657296 +0000 UTC m=+0.149192083 container init 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 09:50:09 np0005604375 podman[75749]: 2026-02-01 14:50:09.674812968 +0000 UTC m=+0.156347685 container start 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:09 np0005604375 podman[75749]: 2026-02-01 14:50:09.678557104 +0000 UTC m=+0.160091881 container attach 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:50:09 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:09 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:09 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.viosrg(active, since 2s)
Feb  1 09:50:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  1 09:50:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673905382' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  1 09:50:10 np0005604375 bold_snyder[75766]: 
Feb  1 09:50:10 np0005604375 bold_snyder[75766]: {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "health": {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "status": "HEALTH_OK",
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "checks": {},
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "mutes": []
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    },
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "election_epoch": 5,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "quorum": [
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        0
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    ],
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "quorum_names": [
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "compute-0"
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    ],
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "quorum_age": 9,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "monmap": {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "epoch": 1,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "min_mon_release_name": "tentacle",
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_mons": 1
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    },
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "osdmap": {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "epoch": 1,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_osds": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_up_osds": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "osd_up_since": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_in_osds": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "osd_in_since": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_remapped_pgs": 0
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    },
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "pgmap": {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "pgs_by_state": [],
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_pgs": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_pools": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_objects": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "data_bytes": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "bytes_used": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "bytes_avail": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "bytes_total": 0
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    },
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "fsmap": {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "epoch": 1,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "btime": "2026-02-01T14:49:58:117399+0000",
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "by_rank": [],
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "up:standby": 0
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    },
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "mgrmap": {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "available": true,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "num_standbys": 0,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "modules": [
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:            "iostat",
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:            "nfs"
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        ],
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "services": {}
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    },
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "servicemap": {
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "epoch": 1,
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "modified": "2026-02-01T14:49:58.120892+0000",
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:        "services": {}
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    },
Feb  1 09:50:10 np0005604375 bold_snyder[75766]:    "progress_events": {}
Feb  1 09:50:10 np0005604375 bold_snyder[75766]: }
Feb  1 09:50:10 np0005604375 systemd[1]: libpod-59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d.scope: Deactivated successfully.
Feb  1 09:50:10 np0005604375 podman[75749]: 2026-02-01 14:50:10.201884963 +0000 UTC m=+0.683419740 container died 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:50:10 np0005604375 systemd[1]: var-lib-containers-storage-overlay-26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2-merged.mount: Deactivated successfully.
Feb  1 09:50:10 np0005604375 podman[75749]: 2026-02-01 14:50:10.247933136 +0000 UTC m=+0.729467863 container remove 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:10 np0005604375 systemd[1]: libpod-conmon-59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d.scope: Deactivated successfully.
Feb  1 09:50:10 np0005604375 podman[75804]: 2026-02-01 14:50:10.327793736 +0000 UTC m=+0.057335534 container create 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 09:50:10 np0005604375 systemd[1]: Started libpod-conmon-3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4.scope.
Feb  1 09:50:10 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:10 np0005604375 podman[75804]: 2026-02-01 14:50:10.304097185 +0000 UTC m=+0.033639063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:10 np0005604375 podman[75804]: 2026-02-01 14:50:10.438666513 +0000 UTC m=+0.168208381 container init 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:10 np0005604375 podman[75804]: 2026-02-01 14:50:10.442226194 +0000 UTC m=+0.171768002 container start 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:10 np0005604375 podman[75804]: 2026-02-01 14:50:10.453644217 +0000 UTC m=+0.183186085 container attach 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:50:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  1 09:50:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1846296928' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  1 09:50:10 np0005604375 elated_burnell[75820]: 
Feb  1 09:50:10 np0005604375 elated_burnell[75820]: [global]
Feb  1 09:50:10 np0005604375 elated_burnell[75820]: 	fsid = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:50:10 np0005604375 elated_burnell[75820]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb  1 09:50:10 np0005604375 elated_burnell[75820]: 	osd_crush_chooseleaf_type = 0
Feb  1 09:50:10 np0005604375 systemd[1]: libpod-3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4.scope: Deactivated successfully.
Feb  1 09:50:10 np0005604375 podman[75804]: 2026-02-01 14:50:10.865736968 +0000 UTC m=+0.595278776 container died 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:50:10 np0005604375 systemd[1]: var-lib-containers-storage-overlay-06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375-merged.mount: Deactivated successfully.
Feb  1 09:50:10 np0005604375 podman[75804]: 2026-02-01 14:50:10.904573437 +0000 UTC m=+0.634115205 container remove 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:10 np0005604375 systemd[1]: libpod-conmon-3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4.scope: Deactivated successfully.
Feb  1 09:50:10 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/1846296928' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  1 09:50:10 np0005604375 podman[75858]: 2026-02-01 14:50:10.955325833 +0000 UTC m=+0.036070532 container create 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 09:50:10 np0005604375 systemd[1]: Started libpod-conmon-6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa.scope.
Feb  1 09:50:11 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:11 np0005604375 podman[75858]: 2026-02-01 14:50:10.939111574 +0000 UTC m=+0.019856343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:11 np0005604375 podman[75858]: 2026-02-01 14:50:11.038014973 +0000 UTC m=+0.118759752 container init 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:11 np0005604375 podman[75858]: 2026-02-01 14:50:11.04427769 +0000 UTC m=+0.125022409 container start 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:11 np0005604375 podman[75858]: 2026-02-01 14:50:11.047382588 +0000 UTC m=+0.128127307 container attach 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:50:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb  1 09:50:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:11 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb  1 09:50:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  1: '-n'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  2: 'mgr.compute-0.viosrg'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  3: '-f'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  4: '--setuser'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  5: 'ceph'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  6: '--setgroup'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  7: 'ceph'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  8: '--default-log-to-file=false'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  9: '--default-log-to-journald=true'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  10: '--default-log-to-stderr=false'
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb  1 09:50:11 np0005604375 ceph-mgr[75469]: mgr respawn  exe_path /proc/self/exe
Feb  1 09:50:11 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.viosrg(active, since 4s)
Feb  1 09:50:11 np0005604375 systemd[1]: libpod-6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa.scope: Deactivated successfully.
Feb  1 09:50:11 np0005604375 podman[75858]: 2026-02-01 14:50:11.976599842 +0000 UTC m=+1.057344521 container died 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:12 np0005604375 systemd[1]: var-lib-containers-storage-overlay-6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1-merged.mount: Deactivated successfully.
Feb  1 09:50:12 np0005604375 podman[75858]: 2026-02-01 14:50:12.01610085 +0000 UTC m=+1.096845559 container remove 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:12 np0005604375 systemd[1]: libpod-conmon-6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa.scope: Deactivated successfully.
Feb  1 09:50:12 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: ignoring --setuser ceph since I am not root
Feb  1 09:50:12 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: ignoring --setgroup ceph since I am not root
Feb  1 09:50:12 np0005604375 ceph-mgr[75469]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  1 09:50:12 np0005604375 ceph-mgr[75469]: pidfile_write: ignore empty --pid-file
Feb  1 09:50:12 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'alerts'
Feb  1 09:50:12 np0005604375 podman[75912]: 2026-02-01 14:50:12.091152224 +0000 UTC m=+0.053408353 container create 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:12 np0005604375 systemd[1]: Started libpod-conmon-185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a.scope.
Feb  1 09:50:12 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:12 np0005604375 podman[75912]: 2026-02-01 14:50:12.069364167 +0000 UTC m=+0.031620336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:12 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'balancer'
Feb  1 09:50:12 np0005604375 podman[75912]: 2026-02-01 14:50:12.168470571 +0000 UTC m=+0.130726710 container init 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:12 np0005604375 podman[75912]: 2026-02-01 14:50:12.173673179 +0000 UTC m=+0.135929298 container start 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 09:50:12 np0005604375 podman[75912]: 2026-02-01 14:50:12.176798597 +0000 UTC m=+0.139054716 container attach 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 09:50:12 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'cephadm'
Feb  1 09:50:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  1 09:50:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230403667' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb  1 09:50:12 np0005604375 peaceful_williamson[75948]: {
Feb  1 09:50:12 np0005604375 peaceful_williamson[75948]:    "epoch": 5,
Feb  1 09:50:12 np0005604375 peaceful_williamson[75948]:    "available": true,
Feb  1 09:50:12 np0005604375 peaceful_williamson[75948]:    "active_name": "compute-0.viosrg",
Feb  1 09:50:12 np0005604375 peaceful_williamson[75948]:    "num_standby": 0
Feb  1 09:50:12 np0005604375 peaceful_williamson[75948]: }
Feb  1 09:50:12 np0005604375 systemd[1]: libpod-185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a.scope: Deactivated successfully.
Feb  1 09:50:12 np0005604375 podman[75912]: 2026-02-01 14:50:12.635589809 +0000 UTC m=+0.597845968 container died 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:12 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861-merged.mount: Deactivated successfully.
Feb  1 09:50:12 np0005604375 podman[75912]: 2026-02-01 14:50:12.665948979 +0000 UTC m=+0.628205098 container remove 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  1 09:50:12 np0005604375 systemd[1]: libpod-conmon-185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a.scope: Deactivated successfully.
Feb  1 09:50:12 np0005604375 podman[75996]: 2026-02-01 14:50:12.737857933 +0000 UTC m=+0.052471875 container create f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:12 np0005604375 systemd[1]: Started libpod-conmon-f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441.scope.
Feb  1 09:50:12 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:12 np0005604375 podman[75996]: 2026-02-01 14:50:12.819265547 +0000 UTC m=+0.133879489 container init f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:12 np0005604375 podman[75996]: 2026-02-01 14:50:12.716436397 +0000 UTC m=+0.031050399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:12 np0005604375 podman[75996]: 2026-02-01 14:50:12.825263427 +0000 UTC m=+0.139877349 container start f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 09:50:12 np0005604375 podman[75996]: 2026-02-01 14:50:12.828766946 +0000 UTC m=+0.143380958 container attach f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 09:50:12 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'crash'
Feb  1 09:50:12 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  1 09:50:12 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'dashboard'
Feb  1 09:50:13 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'devicehealth'
Feb  1 09:50:13 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'diskprediction_local'
Feb  1 09:50:13 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  1 09:50:13 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  1 09:50:13 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]:  from numpy import show_config as show_numpy_config
Feb  1 09:50:13 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'influx'
Feb  1 09:50:13 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'insights'
Feb  1 09:50:13 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'iostat'
Feb  1 09:50:13 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'k8sevents'
Feb  1 09:50:14 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'localpool'
Feb  1 09:50:14 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'mds_autoscaler'
Feb  1 09:50:14 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'mirroring'
Feb  1 09:50:14 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'nfs'
Feb  1 09:50:14 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'orchestrator'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'osd_perf_query'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'osd_support'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'pg_autoscaler'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'progress'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'prometheus'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'rbd_support'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'rgw'
Feb  1 09:50:15 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'rook'
Feb  1 09:50:16 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'selftest'
Feb  1 09:50:16 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'smb'
Feb  1 09:50:16 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'snap_schedule'
Feb  1 09:50:16 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'stats'
Feb  1 09:50:16 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'status'
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'telegraf'
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'telemetry'
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'test_orchestrator'
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: mgr[py] Loading python module 'volumes'
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Active manager daemon compute-0.viosrg restarted
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.viosrg
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: ms_deliver_dispatch: unhandled message 0x55f8f16fc000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: mgr handle_mgr_map Activating!
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.viosrg(active, starting, since 0.0228883s)
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: mgr handle_mgr_map I am now activating
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} v 0)
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} : dispatch
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata"} : dispatch
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e1 all = 1
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata"} : dispatch
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata"} : dispatch
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: balancer
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Manager daemon compute-0.viosrg is now available
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Starting
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:50:17
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:50:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] No pools available
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: Active manager daemon compute-0.viosrg restarted
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: Activating manager daemon compute-0.viosrg
Feb  1 09:50:17 np0005604375 ceph-mon[75179]: Manager daemon compute-0.viosrg is now available
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: cephadm
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: crash
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: devicehealth
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] Starting
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: iostat
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: nfs
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: orchestrator
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: pg_autoscaler
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: progress
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [progress INFO root] Loading...
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [progress INFO root] No stored events to load
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [progress INFO root] Loaded [] historic events
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [progress INFO root] Loaded OSDMap, ready.
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] recovery thread starting
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] starting setup
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: rbd_support
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: status
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: telemetry
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] PerfHandler: starting
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TaskHandler: starting
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} v 0)
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] setup complete
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: mgr load Constructed class from module: volumes
Feb  1 09:50:18 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.viosrg(active, since 1.0323s)
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb  1 09:50:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb  1 09:50:18 np0005604375 dreamy_haslett[76013]: {
Feb  1 09:50:18 np0005604375 dreamy_haslett[76013]:    "mgrmap_epoch": 7,
Feb  1 09:50:18 np0005604375 dreamy_haslett[76013]:    "initialized": true
Feb  1 09:50:18 np0005604375 dreamy_haslett[76013]: }
Feb  1 09:50:18 np0005604375 systemd[1]: libpod-f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441.scope: Deactivated successfully.
Feb  1 09:50:18 np0005604375 podman[76148]: 2026-02-01 14:50:18.787286382 +0000 UTC m=+0.038803549 container died f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:18 np0005604375 systemd[1]: var-lib-containers-storage-overlay-83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8-merged.mount: Deactivated successfully.
Feb  1 09:50:18 np0005604375 podman[76148]: 2026-02-01 14:50:18.825467103 +0000 UTC m=+0.076984270 container remove f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 09:50:18 np0005604375 systemd[1]: libpod-conmon-f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441.scope: Deactivated successfully.
Feb  1 09:50:18 np0005604375 podman[76162]: 2026-02-01 14:50:18.906226828 +0000 UTC m=+0.055750188 container create 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb  1 09:50:18 np0005604375 systemd[1]: Started libpod-conmon-1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f.scope.
Feb  1 09:50:18 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:18 np0005604375 podman[76162]: 2026-02-01 14:50:18.881343304 +0000 UTC m=+0.030866714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:18 np0005604375 podman[76162]: 2026-02-01 14:50:18.996350418 +0000 UTC m=+0.145873748 container init 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:50:19 np0005604375 podman[76162]: 2026-02-01 14:50:19.002539664 +0000 UTC m=+0.152062994 container start 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:19 np0005604375 podman[76162]: 2026-02-01 14:50:19.005889898 +0000 UTC m=+0.155413258 container attach 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: Found migration_current of "None". Setting to last migration.
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Bus STARTING
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Bus STARTING
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Serving on https://192.168.122.100:7150
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Serving on https://192.168.122.100:7150
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Client ('192.168.122.100', 46906) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Client ('192.168.122.100', 46906) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Serving on http://192.168.122.100:8765
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Serving on http://192.168.122.100:8765
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Bus STARTED
Feb  1 09:50:19 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Bus STARTED
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  1 09:50:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019899420 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Bus STARTING
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Serving on https://192.168.122.100:7150
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Client ('192.168.122.100', 46906) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Serving on http://192.168.122.100:8765
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Bus STARTED
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb  1 09:50:20 np0005604375 blissful_leakey[76178]: module 'orchestrator' is already enabled (always-on)
Feb  1 09:50:20 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.viosrg(active, since 2s)
Feb  1 09:50:20 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:20 np0005604375 systemd[1]: libpod-1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f.scope: Deactivated successfully.
Feb  1 09:50:20 np0005604375 podman[76162]: 2026-02-01 14:50:20.442947622 +0000 UTC m=+1.592471002 container died 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:20 np0005604375 systemd[1]: var-lib-containers-storage-overlay-682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa-merged.mount: Deactivated successfully.
Feb  1 09:50:20 np0005604375 podman[76162]: 2026-02-01 14:50:20.483820738 +0000 UTC m=+1.633344098 container remove 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 09:50:20 np0005604375 systemd[1]: libpod-conmon-1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f.scope: Deactivated successfully.
Feb  1 09:50:20 np0005604375 podman[76239]: 2026-02-01 14:50:20.551410891 +0000 UTC m=+0.047195837 container create 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 09:50:20 np0005604375 systemd[1]: Started libpod-conmon-2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5.scope.
Feb  1 09:50:20 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:20 np0005604375 podman[76239]: 2026-02-01 14:50:20.617188962 +0000 UTC m=+0.112973938 container init 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 09:50:20 np0005604375 podman[76239]: 2026-02-01 14:50:20.622159353 +0000 UTC m=+0.117944319 container start 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:50:20 np0005604375 podman[76239]: 2026-02-01 14:50:20.62593984 +0000 UTC m=+0.121724756 container attach 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 09:50:20 np0005604375 podman[76239]: 2026-02-01 14:50:20.534843552 +0000 UTC m=+0.030628478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:21 np0005604375 systemd[1]: libpod-2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5.scope: Deactivated successfully.
Feb  1 09:50:21 np0005604375 podman[76239]: 2026-02-01 14:50:21.033816452 +0000 UTC m=+0.529601398 container died 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  1 09:50:21 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785-merged.mount: Deactivated successfully.
Feb  1 09:50:21 np0005604375 podman[76239]: 2026-02-01 14:50:21.06840197 +0000 UTC m=+0.564186906 container remove 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 09:50:21 np0005604375 systemd[1]: libpod-conmon-2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5.scope: Deactivated successfully.
Feb  1 09:50:21 np0005604375 podman[76293]: 2026-02-01 14:50:21.131800224 +0000 UTC m=+0.047935157 container create 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 09:50:21 np0005604375 systemd[1]: Started libpod-conmon-8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9.scope.
Feb  1 09:50:21 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 podman[76293]: 2026-02-01 14:50:21.111959023 +0000 UTC m=+0.028093966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:21 np0005604375 podman[76293]: 2026-02-01 14:50:21.213124726 +0000 UTC m=+0.129259669 container init 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:21 np0005604375 podman[76293]: 2026-02-01 14:50:21.219852416 +0000 UTC m=+0.135987369 container start 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:21 np0005604375 podman[76293]: 2026-02-01 14:50:21.224212369 +0000 UTC m=+0.140347302 container attach 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_user
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb  1 09:50:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_config
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb  1 09:50:21 np0005604375 dazzling_ritchie[76310]: ssh user set to ceph-admin. sudo will be used
Feb  1 09:50:21 np0005604375 systemd[1]: libpod-8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9.scope: Deactivated successfully.
Feb  1 09:50:21 np0005604375 podman[76293]: 2026-02-01 14:50:21.633869111 +0000 UTC m=+0.550004054 container died 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb  1 09:50:21 np0005604375 systemd[1]: var-lib-containers-storage-overlay-426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a-merged.mount: Deactivated successfully.
Feb  1 09:50:21 np0005604375 podman[76293]: 2026-02-01 14:50:21.6698519 +0000 UTC m=+0.585986823 container remove 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:21 np0005604375 systemd[1]: libpod-conmon-8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9.scope: Deactivated successfully.
Feb  1 09:50:21 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:21 np0005604375 podman[76348]: 2026-02-01 14:50:21.725522255 +0000 UTC m=+0.043389719 container create a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 09:50:21 np0005604375 systemd[1]: Started libpod-conmon-a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b.scope.
Feb  1 09:50:21 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:21 np0005604375 podman[76348]: 2026-02-01 14:50:21.797599434 +0000 UTC m=+0.115466908 container init a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:21 np0005604375 podman[76348]: 2026-02-01 14:50:21.701441564 +0000 UTC m=+0.019309078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:21 np0005604375 podman[76348]: 2026-02-01 14:50:21.812464985 +0000 UTC m=+0.130332439 container start a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 09:50:21 np0005604375 podman[76348]: 2026-02-01 14:50:21.816585722 +0000 UTC m=+0.134453236 container attach a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_identity_key
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Set ssh private key
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh private key
Feb  1 09:50:22 np0005604375 systemd[1]: libpod-a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b.scope: Deactivated successfully.
Feb  1 09:50:22 np0005604375 podman[76348]: 2026-02-01 14:50:22.281844407 +0000 UTC m=+0.599711841 container died a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:50:22 np0005604375 systemd[1]: var-lib-containers-storage-overlay-83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5-merged.mount: Deactivated successfully.
Feb  1 09:50:22 np0005604375 podman[76348]: 2026-02-01 14:50:22.319281957 +0000 UTC m=+0.637149411 container remove a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 09:50:22 np0005604375 systemd[1]: libpod-conmon-a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b.scope: Deactivated successfully.
Feb  1 09:50:22 np0005604375 podman[76403]: 2026-02-01 14:50:22.38477036 +0000 UTC m=+0.047014222 container create 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:22 np0005604375 systemd[1]: Started libpod-conmon-031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614.scope.
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:22 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:22 np0005604375 podman[76403]: 2026-02-01 14:50:22.368627473 +0000 UTC m=+0.030871365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:22 np0005604375 podman[76403]: 2026-02-01 14:50:22.463662612 +0000 UTC m=+0.125906594 container init 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:50:22 np0005604375 podman[76403]: 2026-02-01 14:50:22.477795062 +0000 UTC m=+0.140038954 container start 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:50:22 np0005604375 podman[76403]: 2026-02-01 14:50:22.481887978 +0000 UTC m=+0.144131940 container attach 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: Set ssh ssh_user
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: Set ssh ssh_config
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: ssh user set to ceph-admin. sudo will be used
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb  1 09:50:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb  1 09:50:22 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Feb  1 09:50:22 np0005604375 systemd[1]: libpod-031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614.scope: Deactivated successfully.
Feb  1 09:50:22 np0005604375 podman[76403]: 2026-02-01 14:50:22.913043388 +0000 UTC m=+0.575287280 container died 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:22 np0005604375 systemd[1]: var-lib-containers-storage-overlay-650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e-merged.mount: Deactivated successfully.
Feb  1 09:50:22 np0005604375 podman[76403]: 2026-02-01 14:50:22.957081314 +0000 UTC m=+0.619325206 container remove 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:50:22 np0005604375 systemd[1]: libpod-conmon-031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614.scope: Deactivated successfully.
Feb  1 09:50:23 np0005604375 podman[76458]: 2026-02-01 14:50:23.033710833 +0000 UTC m=+0.055704077 container create 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:23 np0005604375 systemd[1]: Started libpod-conmon-3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7.scope.
Feb  1 09:50:23 np0005604375 podman[76458]: 2026-02-01 14:50:23.009768695 +0000 UTC m=+0.031761999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:23 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:23 np0005604375 podman[76458]: 2026-02-01 14:50:23.133746644 +0000 UTC m=+0.155739968 container init 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:23 np0005604375 podman[76458]: 2026-02-01 14:50:23.140849975 +0000 UTC m=+0.162843219 container start 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:23 np0005604375 podman[76458]: 2026-02-01 14:50:23.145214318 +0000 UTC m=+0.167207632 container attach 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 09:50:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:23 np0005604375 blissful_ishizaka[76474]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuc62woYQ6HfDdFdKxH9p2YvJ2Cu5z79VhJzSOBo96c05tD8Q91qYPpnXfDIEo83mJltB9P6bcxmVNw1QVUUGbTbW0drCaQkf+KnajOtuJ1H+96zTyvUYiCNXUxdYQ4vrlju8lrI5XjvOA066ddPwBuJ8t12jQk26l6X0LfCUirqvXIiXcpVvBNUkxDLulQwGUy2yIkNBevRvbJskFNHqcEy4sOkLBDYXSaPVtrmzuNRDBdqm6U6xfWmHQXiF4gVuOKNRms/+KUhCUY/dDWHj1jIJVmrTMVZhEQZgyhAXbb4JDMK9/NMCalRhh3f6UlBxmcQgSsNmGk+UgD+w0jbODdYMec0vOXZOYRnClALtuxqNe/enT9GyKc314/xWjLRumtOqPjjz+NtYPr7tAZVAlPENDlLhvzKVycefF4CPAvaPcqTcMWtfXYgGqcOQj4vwWaRndS9s95sQPLaIeJ8i+ZMggfF+tMpw9Zm0boto6XPwjw4ZWXu9etZ2GDbMSfAE= zuul@controller
Feb  1 09:50:23 np0005604375 systemd[1]: libpod-3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7.scope: Deactivated successfully.
Feb  1 09:50:23 np0005604375 podman[76458]: 2026-02-01 14:50:23.600730837 +0000 UTC m=+0.622724091 container died 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 09:50:23 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf-merged.mount: Deactivated successfully.
Feb  1 09:50:23 np0005604375 podman[76458]: 2026-02-01 14:50:23.646559744 +0000 UTC m=+0.668552968 container remove 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:50:23 np0005604375 systemd[1]: libpod-conmon-3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7.scope: Deactivated successfully.
Feb  1 09:50:23 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:23 np0005604375 podman[76512]: 2026-02-01 14:50:23.725138397 +0000 UTC m=+0.055652096 container create 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:50:23 np0005604375 systemd[1]: Started libpod-conmon-57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d.scope.
Feb  1 09:50:23 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:23 np0005604375 podman[76512]: 2026-02-01 14:50:23.700547291 +0000 UTC m=+0.031061040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:23 np0005604375 podman[76512]: 2026-02-01 14:50:23.804065671 +0000 UTC m=+0.134579350 container init 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:23 np0005604375 podman[76512]: 2026-02-01 14:50:23.811082149 +0000 UTC m=+0.141595808 container start 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:23 np0005604375 podman[76512]: 2026-02-01 14:50:23.81464527 +0000 UTC m=+0.145158959 container attach 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 09:50:23 np0005604375 ceph-mon[75179]: Set ssh ssh_identity_key
Feb  1 09:50:23 np0005604375 ceph-mon[75179]: Set ssh private key
Feb  1 09:50:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:23 np0005604375 ceph-mon[75179]: Set ssh ssh_identity_pub
Feb  1 09:50:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:24 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:24 np0005604375 systemd-logind[786]: New session 20 of user ceph-admin.
Feb  1 09:50:24 np0005604375 systemd[1]: Created slice User Slice of UID 42477.
Feb  1 09:50:24 np0005604375 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  1 09:50:24 np0005604375 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  1 09:50:24 np0005604375 systemd[1]: Starting User Manager for UID 42477...
Feb  1 09:50:24 np0005604375 systemd[76558]: Queued start job for default target Main User Target.
Feb  1 09:50:24 np0005604375 systemd[76558]: Created slice User Application Slice.
Feb  1 09:50:24 np0005604375 systemd[76558]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  1 09:50:24 np0005604375 systemd[76558]: Started Daily Cleanup of User's Temporary Directories.
Feb  1 09:50:24 np0005604375 systemd[76558]: Reached target Paths.
Feb  1 09:50:24 np0005604375 systemd[76558]: Reached target Timers.
Feb  1 09:50:24 np0005604375 systemd[76558]: Starting D-Bus User Message Bus Socket...
Feb  1 09:50:24 np0005604375 systemd[76558]: Starting Create User's Volatile Files and Directories...
Feb  1 09:50:24 np0005604375 systemd[76558]: Listening on D-Bus User Message Bus Socket.
Feb  1 09:50:24 np0005604375 systemd[76558]: Reached target Sockets.
Feb  1 09:50:24 np0005604375 systemd-logind[786]: New session 22 of user ceph-admin.
Feb  1 09:50:24 np0005604375 systemd[76558]: Finished Create User's Volatile Files and Directories.
Feb  1 09:50:24 np0005604375 systemd[76558]: Reached target Basic System.
Feb  1 09:50:24 np0005604375 systemd[76558]: Reached target Main User Target.
Feb  1 09:50:24 np0005604375 systemd[76558]: Startup finished in 143ms.
Feb  1 09:50:24 np0005604375 systemd[1]: Started User Manager for UID 42477.
Feb  1 09:50:24 np0005604375 systemd[1]: Started Session 20 of User ceph-admin.
Feb  1 09:50:24 np0005604375 systemd[1]: Started Session 22 of User ceph-admin.
Feb  1 09:50:25 np0005604375 systemd-logind[786]: New session 23 of user ceph-admin.
Feb  1 09:50:25 np0005604375 systemd[1]: Started Session 23 of User ceph-admin.
Feb  1 09:50:25 np0005604375 systemd-logind[786]: New session 24 of user ceph-admin.
Feb  1 09:50:25 np0005604375 systemd[1]: Started Session 24 of User ceph-admin.
Feb  1 09:50:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052558 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:25 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb  1 09:50:25 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb  1 09:50:25 np0005604375 systemd-logind[786]: New session 25 of user ceph-admin.
Feb  1 09:50:25 np0005604375 systemd[1]: Started Session 25 of User ceph-admin.
Feb  1 09:50:25 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:25 np0005604375 systemd-logind[786]: New session 26 of user ceph-admin.
Feb  1 09:50:26 np0005604375 systemd[1]: Started Session 26 of User ceph-admin.
Feb  1 09:50:26 np0005604375 systemd-logind[786]: New session 27 of user ceph-admin.
Feb  1 09:50:26 np0005604375 systemd[1]: Started Session 27 of User ceph-admin.
Feb  1 09:50:26 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:26 np0005604375 systemd-logind[786]: New session 28 of user ceph-admin.
Feb  1 09:50:26 np0005604375 systemd[1]: Started Session 28 of User ceph-admin.
Feb  1 09:50:26 np0005604375 ceph-mon[75179]: Deploying cephadm binary to compute-0
Feb  1 09:50:27 np0005604375 systemd-logind[786]: New session 29 of user ceph-admin.
Feb  1 09:50:27 np0005604375 systemd[1]: Started Session 29 of User ceph-admin.
Feb  1 09:50:27 np0005604375 systemd-logind[786]: New session 30 of user ceph-admin.
Feb  1 09:50:27 np0005604375 systemd[1]: Started Session 30 of User ceph-admin.
Feb  1 09:50:27 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:28 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:28 np0005604375 systemd-logind[786]: New session 31 of user ceph-admin.
Feb  1 09:50:28 np0005604375 systemd[1]: Started Session 31 of User ceph-admin.
Feb  1 09:50:29 np0005604375 systemd-logind[786]: New session 32 of user ceph-admin.
Feb  1 09:50:29 np0005604375 systemd[1]: Started Session 32 of User ceph-admin.
Feb  1 09:50:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  1 09:50:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:29 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Added host compute-0
Feb  1 09:50:29 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  1 09:50:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  1 09:50:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  1 09:50:29 np0005604375 loving_pare[76528]: Added host 'compute-0' with addr '192.168.122.100'
Feb  1 09:50:29 np0005604375 systemd[1]: libpod-57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d.scope: Deactivated successfully.
Feb  1 09:50:29 np0005604375 podman[76512]: 2026-02-01 14:50:29.560028057 +0000 UTC m=+5.890541736 container died 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:29 np0005604375 systemd[1]: var-lib-containers-storage-overlay-031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac-merged.mount: Deactivated successfully.
Feb  1 09:50:29 np0005604375 podman[76512]: 2026-02-01 14:50:29.617805952 +0000 UTC m=+5.948319641 container remove 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 09:50:29 np0005604375 systemd[1]: libpod-conmon-57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d.scope: Deactivated successfully.
Feb  1 09:50:29 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:29 np0005604375 podman[76953]: 2026-02-01 14:50:29.701705876 +0000 UTC m=+0.056129409 container create 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:50:29 np0005604375 systemd[1]: Started libpod-conmon-495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7.scope.
Feb  1 09:50:29 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:29 np0005604375 podman[76953]: 2026-02-01 14:50:29.681039701 +0000 UTC m=+0.035463234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:29 np0005604375 podman[76953]: 2026-02-01 14:50:29.790067316 +0000 UTC m=+0.144490889 container init 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:50:29 np0005604375 podman[76953]: 2026-02-01 14:50:29.796999573 +0000 UTC m=+0.151423136 container start 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:29 np0005604375 podman[76953]: 2026-02-01 14:50:29.801406437 +0000 UTC m=+0.155830040 container attach 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 09:50:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:30 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb  1 09:50:30 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:30 np0005604375 flamboyant_feistel[76989]: Scheduled mon update...
Feb  1 09:50:30 np0005604375 systemd[1]: libpod-495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7.scope: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[76953]: 2026-02-01 14:50:30.222890594 +0000 UTC m=+0.577314147 container died 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:30 np0005604375 systemd[1]: var-lib-containers-storage-overlay-96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a-merged.mount: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[76953]: 2026-02-01 14:50:30.264347957 +0000 UTC m=+0.618771470 container remove 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:30 np0005604375 systemd[1]: libpod-conmon-495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7.scope: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[77051]: 2026-02-01 14:50:30.320730403 +0000 UTC m=+0.044533221 container create 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 09:50:30 np0005604375 systemd[1]: Started libpod-conmon-60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc.scope.
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054701 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:30 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:30 np0005604375 podman[77051]: 2026-02-01 14:50:30.295420836 +0000 UTC m=+0.019223724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:30 np0005604375 podman[77023]: 2026-02-01 14:50:30.399187153 +0000 UTC m=+0.446550057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:30 np0005604375 podman[77051]: 2026-02-01 14:50:30.411211163 +0000 UTC m=+0.135014021 container init 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:50:30 np0005604375 podman[77051]: 2026-02-01 14:50:30.416284777 +0000 UTC m=+0.140087595 container start 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 09:50:30 np0005604375 podman[77051]: 2026-02-01 14:50:30.419676163 +0000 UTC m=+0.143479021 container attach 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  1 09:50:30 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:30 np0005604375 podman[77087]: 2026-02-01 14:50:30.510178284 +0000 UTC m=+0.042535335 container create 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: Added host compute-0
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:30 np0005604375 systemd[1]: Started libpod-conmon-5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7.scope.
Feb  1 09:50:30 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:30 np0005604375 podman[77087]: 2026-02-01 14:50:30.573418603 +0000 UTC m=+0.105775674 container init 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:30 np0005604375 podman[77087]: 2026-02-01 14:50:30.580842893 +0000 UTC m=+0.113199934 container start 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:30 np0005604375 podman[77087]: 2026-02-01 14:50:30.585149335 +0000 UTC m=+0.117506426 container attach 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 09:50:30 np0005604375 podman[77087]: 2026-02-01 14:50:30.489974662 +0000 UTC m=+0.022331723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:30 np0005604375 ecstatic_payne[77105]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb  1 09:50:30 np0005604375 systemd[1]: libpod-5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7.scope: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[77087]: 2026-02-01 14:50:30.69415299 +0000 UTC m=+0.226510051 container died 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb  1 09:50:30 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c30dc1dfd370a4f924baadeade66151a1b606eaa1a4441ce8b832f5eb7d46146-merged.mount: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[77087]: 2026-02-01 14:50:30.733335718 +0000 UTC m=+0.265692769 container remove 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:30 np0005604375 systemd[1]: libpod-conmon-5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7.scope: Deactivated successfully.
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:30 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb  1 09:50:30 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  1 09:50:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:30 np0005604375 admiring_northcutt[77067]: Scheduled mgr update...
Feb  1 09:50:30 np0005604375 systemd[1]: libpod-60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc.scope: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[77051]: 2026-02-01 14:50:30.877572149 +0000 UTC m=+0.601374957 container died 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 09:50:30 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a-merged.mount: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[77051]: 2026-02-01 14:50:30.919035512 +0000 UTC m=+0.642838320 container remove 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:30 np0005604375 systemd[1]: libpod-conmon-60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc.scope: Deactivated successfully.
Feb  1 09:50:30 np0005604375 podman[77204]: 2026-02-01 14:50:30.970160659 +0000 UTC m=+0.037052630 container create 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:31 np0005604375 systemd[1]: Started libpod-conmon-752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e.scope.
Feb  1 09:50:31 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:31 np0005604375 podman[77204]: 2026-02-01 14:50:30.951626974 +0000 UTC m=+0.018518975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:31 np0005604375 podman[77204]: 2026-02-01 14:50:31.050912324 +0000 UTC m=+0.117804335 container init 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 09:50:31 np0005604375 podman[77204]: 2026-02-01 14:50:31.056945795 +0000 UTC m=+0.123837756 container start 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:50:31 np0005604375 podman[77204]: 2026-02-01 14:50:31.060613008 +0000 UTC m=+0.127504999 container attach 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:31 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service crash spec with placement *
Feb  1 09:50:31 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:31 np0005604375 angry_haslett[77221]: Scheduled crash update...
Feb  1 09:50:31 np0005604375 systemd[1]: libpod-752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e.scope: Deactivated successfully.
Feb  1 09:50:31 np0005604375 podman[77204]: 2026-02-01 14:50:31.52183806 +0000 UTC m=+0.588730021 container died 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:31 np0005604375 systemd[1]: var-lib-containers-storage-overlay-77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2-merged.mount: Deactivated successfully.
Feb  1 09:50:31 np0005604375 podman[77204]: 2026-02-01 14:50:31.559774753 +0000 UTC m=+0.626666714 container remove 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:50:31 np0005604375 systemd[1]: libpod-conmon-752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e.scope: Deactivated successfully.
Feb  1 09:50:31 np0005604375 podman[77334]: 2026-02-01 14:50:31.620727678 +0000 UTC m=+0.046157017 container create 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:31 np0005604375 systemd[1]: Started libpod-conmon-820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae.scope.
Feb  1 09:50:31 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:31 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:31 np0005604375 podman[77334]: 2026-02-01 14:50:31.597006547 +0000 UTC m=+0.022435846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:31 np0005604375 podman[77334]: 2026-02-01 14:50:31.703247273 +0000 UTC m=+0.128676612 container init 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:50:31 np0005604375 podman[77334]: 2026-02-01 14:50:31.707791782 +0000 UTC m=+0.133221091 container start 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:50:31 np0005604375 podman[77334]: 2026-02-01 14:50:31.711410444 +0000 UTC m=+0.136839763 container attach 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:50:31 np0005604375 podman[77389]: 2026-02-01 14:50:31.777276338 +0000 UTC m=+0.052319312 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: Saving service mon spec with placement count:5
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: Saving service mgr spec with placement count:2
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:31 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:31 np0005604375 podman[77428]: 2026-02-01 14:50:31.94450596 +0000 UTC m=+0.049886433 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:31 np0005604375 podman[77389]: 2026-02-01 14:50:31.951845968 +0000 UTC m=+0.226888892 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/700280739' entity='client.admin' 
Feb  1 09:50:32 np0005604375 systemd[1]: libpod-820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae.scope: Deactivated successfully.
Feb  1 09:50:32 np0005604375 podman[77334]: 2026-02-01 14:50:32.138014496 +0000 UTC m=+0.563443815 container died 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 09:50:32 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487-merged.mount: Deactivated successfully.
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:32 np0005604375 podman[77334]: 2026-02-01 14:50:32.176911726 +0000 UTC m=+0.602341045 container remove 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 09:50:32 np0005604375 systemd[1]: libpod-conmon-820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae.scope: Deactivated successfully.
Feb  1 09:50:32 np0005604375 podman[77505]: 2026-02-01 14:50:32.249969164 +0000 UTC m=+0.050577423 container create c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:32 np0005604375 systemd[1]: Started libpod-conmon-c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644.scope.
Feb  1 09:50:32 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:32 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:32 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:32 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:32 np0005604375 podman[77505]: 2026-02-01 14:50:32.309260651 +0000 UTC m=+0.109868930 container init c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 09:50:32 np0005604375 podman[77505]: 2026-02-01 14:50:32.314398797 +0000 UTC m=+0.115007076 container start c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  1 09:50:32 np0005604375 podman[77505]: 2026-02-01 14:50:32.317617908 +0000 UTC m=+0.118226187 container attach c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:32 np0005604375 podman[77505]: 2026-02-01 14:50:32.233618111 +0000 UTC m=+0.034226390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:32 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:32 np0005604375 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77604 (sysctl)
Feb  1 09:50:32 np0005604375 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb  1 09:50:32 np0005604375 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb  1 09:50:32 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:32 np0005604375 systemd[1]: libpod-c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644.scope: Deactivated successfully.
Feb  1 09:50:32 np0005604375 podman[77505]: 2026-02-01 14:50:32.72974056 +0000 UTC m=+0.530348859 container died c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 09:50:32 np0005604375 systemd[1]: var-lib-containers-storage-overlay-be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6-merged.mount: Deactivated successfully.
Feb  1 09:50:32 np0005604375 podman[77505]: 2026-02-01 14:50:32.764712699 +0000 UTC m=+0.565320968 container remove c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 09:50:32 np0005604375 systemd[1]: libpod-conmon-c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644.scope: Deactivated successfully.
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: Saving service crash spec with placement *
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/700280739' entity='client.admin' 
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:32 np0005604375 podman[77643]: 2026-02-01 14:50:32.824628815 +0000 UTC m=+0.044021957 container create a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:32 np0005604375 systemd[1]: Started libpod-conmon-a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01.scope.
Feb  1 09:50:32 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:32 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:32 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:32 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:32 np0005604375 podman[77643]: 2026-02-01 14:50:32.890390436 +0000 UTC m=+0.109783628 container init a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 09:50:32 np0005604375 podman[77643]: 2026-02-01 14:50:32.894837931 +0000 UTC m=+0.114231083 container start a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:50:32 np0005604375 podman[77643]: 2026-02-01 14:50:32.898630959 +0000 UTC m=+0.118024151 container attach a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:32 np0005604375 podman[77643]: 2026-02-01 14:50:32.810768992 +0000 UTC m=+0.030162184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  1 09:50:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:33 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Added label _admin to host compute-0
Feb  1 09:50:33 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb  1 09:50:33 np0005604375 brave_golick[77704]: Added label _admin to host compute-0
Feb  1 09:50:33 np0005604375 systemd[1]: libpod-a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01.scope: Deactivated successfully.
Feb  1 09:50:33 np0005604375 podman[77643]: 2026-02-01 14:50:33.306173701 +0000 UTC m=+0.525566883 container died a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:33 np0005604375 systemd[1]: var-lib-containers-storage-overlay-95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a-merged.mount: Deactivated successfully.
Feb  1 09:50:33 np0005604375 podman[77643]: 2026-02-01 14:50:33.350207287 +0000 UTC m=+0.569600459 container remove a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 09:50:33 np0005604375 systemd[1]: libpod-conmon-a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01.scope: Deactivated successfully.
Feb  1 09:50:33 np0005604375 podman[77822]: 2026-02-01 14:50:33.397968098 +0000 UTC m=+0.031617085 container create 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:33 np0005604375 podman[77820]: 2026-02-01 14:50:33.425086196 +0000 UTC m=+0.055745899 container create f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:33 np0005604375 systemd[1]: Started libpod-conmon-41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119.scope.
Feb  1 09:50:33 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:33 np0005604375 systemd[1]: Started libpod-conmon-f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099.scope.
Feb  1 09:50:33 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:33 np0005604375 podman[77822]: 2026-02-01 14:50:33.467924628 +0000 UTC m=+0.101573665 container init 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb  1 09:50:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:33 np0005604375 podman[77822]: 2026-02-01 14:50:33.47790619 +0000 UTC m=+0.111555217 container start 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:50:33 np0005604375 podman[77822]: 2026-02-01 14:50:33.382951404 +0000 UTC m=+0.016600421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:33 np0005604375 podman[77822]: 2026-02-01 14:50:33.481942835 +0000 UTC m=+0.115591822 container attach 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:33 np0005604375 zen_shannon[77851]: 167 167
Feb  1 09:50:33 np0005604375 systemd[1]: libpod-41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119.scope: Deactivated successfully.
Feb  1 09:50:33 np0005604375 podman[77820]: 2026-02-01 14:50:33.485747082 +0000 UTC m=+0.116406805 container init f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb  1 09:50:33 np0005604375 podman[77822]: 2026-02-01 14:50:33.486809142 +0000 UTC m=+0.120458139 container died 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Feb  1 09:50:33 np0005604375 podman[77820]: 2026-02-01 14:50:33.492325358 +0000 UTC m=+0.122985061 container start f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:33 np0005604375 podman[77820]: 2026-02-01 14:50:33.49907964 +0000 UTC m=+0.129739363 container attach f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb  1 09:50:33 np0005604375 podman[77820]: 2026-02-01 14:50:33.405946214 +0000 UTC m=+0.036606017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:33 np0005604375 systemd[1]: var-lib-containers-storage-overlay-77c3acc5c5e3b72f5d61b390302d65d8e9e100c973cd147ad426d77f19c37d8f-merged.mount: Deactivated successfully.
Feb  1 09:50:33 np0005604375 podman[77822]: 2026-02-01 14:50:33.521872145 +0000 UTC m=+0.155521142 container remove 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:50:33 np0005604375 systemd[1]: libpod-conmon-41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119.scope: Deactivated successfully.
Feb  1 09:50:33 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb  1 09:50:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/117713498' entity='client.admin' 
Feb  1 09:50:34 np0005604375 infallible_proskuriakova[77857]: set mgr/dashboard/cluster/status
Feb  1 09:50:34 np0005604375 systemd[1]: libpod-f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099.scope: Deactivated successfully.
Feb  1 09:50:34 np0005604375 podman[77820]: 2026-02-01 14:50:34.074808671 +0000 UTC m=+0.705468414 container died f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:50:34 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3-merged.mount: Deactivated successfully.
Feb  1 09:50:34 np0005604375 podman[77820]: 2026-02-01 14:50:34.1154061 +0000 UTC m=+0.746065813 container remove f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  1 09:50:34 np0005604375 systemd[1]: libpod-conmon-f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099.scope: Deactivated successfully.
Feb  1 09:50:34 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:34 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:34 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:34 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:34 np0005604375 podman[77957]: 2026-02-01 14:50:34.531952536 +0000 UTC m=+0.044115729 container create a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:50:34 np0005604375 systemd[1]: Started libpod-conmon-a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d.scope.
Feb  1 09:50:34 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:34 np0005604375 podman[77957]: 2026-02-01 14:50:34.509040408 +0000 UTC m=+0.021203651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:34 np0005604375 podman[77957]: 2026-02-01 14:50:34.62007804 +0000 UTC m=+0.132241273 container init a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb  1 09:50:34 np0005604375 podman[77957]: 2026-02-01 14:50:34.635052493 +0000 UTC m=+0.147215676 container start a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb  1 09:50:34 np0005604375 podman[77957]: 2026-02-01 14:50:34.638848721 +0000 UTC m=+0.151011964 container attach a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:34 np0005604375 python3[78003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:35 np0005604375 podman[78009]: 2026-02-01 14:50:35.014027127 +0000 UTC m=+0.071329859 container create 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:35 np0005604375 systemd[1]: Started libpod-conmon-56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c.scope.
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: Added label _admin to host compute-0
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/117713498' entity='client.admin' 
Feb  1 09:50:35 np0005604375 podman[78009]: 2026-02-01 14:50:34.982494005 +0000 UTC m=+0.039796837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:35 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b56feefdd05c9c6949dcb1520c316e544b366b46804e2abf60303f6831df8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b56feefdd05c9c6949dcb1520c316e544b366b46804e2abf60303f6831df8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:35 np0005604375 podman[78009]: 2026-02-01 14:50:35.112172894 +0000 UTC m=+0.169475656 container init 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:50:35 np0005604375 podman[78009]: 2026-02-01 14:50:35.12083617 +0000 UTC m=+0.178138922 container start 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 09:50:35 np0005604375 podman[78009]: 2026-02-01 14:50:35.124562405 +0000 UTC m=+0.181865177 container attach 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]: [
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:    {
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "available": false,
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "being_replaced": false,
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "ceph_device_lvm": false,
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "lsm_data": {},
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "lvs": [],
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "path": "/dev/sr0",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "rejected_reasons": [
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "Insufficient space (<5GB)",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "Has a FileSystem"
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        ],
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        "sys_api": {
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "actuators": null,
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "device_nodes": [
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:                "sr0"
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            ],
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "devname": "sr0",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "human_readable_size": "482.00 KB",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "id_bus": "ata",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "model": "QEMU DVD-ROM",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "nr_requests": "2",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "parent": "/dev/sr0",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "partitions": {},
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "path": "/dev/sr0",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "removable": "1",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "rev": "2.5+",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "ro": "0",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "rotational": "1",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "sas_address": "",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "sas_device_handle": "",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "scheduler_mode": "mq-deadline",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "sectors": 0,
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "sectorsize": "2048",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "size": 493568.0,
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "support_discard": "2048",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "type": "disk",
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:            "vendor": "QEMU"
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:        }
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]:    }
Feb  1 09:50:35 np0005604375 compassionate_curran[77973]: ]
Feb  1 09:50:35 np0005604375 systemd[1]: libpod-a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d.scope: Deactivated successfully.
Feb  1 09:50:35 np0005604375 podman[77957]: 2026-02-01 14:50:35.197289383 +0000 UTC m=+0.709452536 container died a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660-merged.mount: Deactivated successfully.
Feb  1 09:50:35 np0005604375 podman[77957]: 2026-02-01 14:50:35.235829553 +0000 UTC m=+0.747992696 container remove a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:35 np0005604375 systemd[1]: libpod-conmon-a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d.scope: Deactivated successfully.
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:35 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  1 09:50:35 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb  1 09:50:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1889255596' entity='client.admin' 
Feb  1 09:50:35 np0005604375 systemd[1]: libpod-56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c.scope: Deactivated successfully.
Feb  1 09:50:35 np0005604375 podman[78009]: 2026-02-01 14:50:35.566723627 +0000 UTC m=+0.624026349 container died 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ca3b56feefdd05c9c6949dcb1520c316e544b366b46804e2abf60303f6831df8-merged.mount: Deactivated successfully.
Feb  1 09:50:35 np0005604375 podman[78009]: 2026-02-01 14:50:35.59722304 +0000 UTC m=+0.654525762 container remove 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:35 np0005604375 systemd[1]: libpod-conmon-56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c.scope: Deactivated successfully.
Feb  1 09:50:35 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf
Feb  1 09:50:35 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf
Feb  1 09:50:35 np0005604375 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  1 09:50:36 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  1 09:50:36 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: Updating compute-0:/etc/ceph/ceph.conf
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/1889255596' entity='client.admin' 
Feb  1 09:50:36 np0005604375 ceph-mon[75179]: Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf
Feb  1 09:50:36 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:36 np0005604375 ansible-async_wrapper.py[79434]: Invoked with j351000907281 30 /home/zuul/.ansible/tmp/ansible-tmp-1769957435.8967826-36400-270880312506536/AnsiballZ_command.py _
Feb  1 09:50:36 np0005604375 ansible-async_wrapper.py[79509]: Starting module and watcher
Feb  1 09:50:36 np0005604375 ansible-async_wrapper.py[79509]: Start watching 79510 (30)
Feb  1 09:50:36 np0005604375 ansible-async_wrapper.py[79510]: Start module (79510)
Feb  1 09:50:36 np0005604375 ansible-async_wrapper.py[79434]: Return async_wrapper task started.
Feb  1 09:50:36 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring
Feb  1 09:50:36 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring
Feb  1 09:50:36 np0005604375 python3[79512]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:36 np0005604375 podman[79587]: 2026-02-01 14:50:36.673084564 +0000 UTC m=+0.042004530 container create dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb  1 09:50:36 np0005604375 systemd[1]: Started libpod-conmon-dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9.scope.
Feb  1 09:50:36 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73ce7a8b95741a11d17e73023e28b546866f9e4b02a1f23027b5faced8a52d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73ce7a8b95741a11d17e73023e28b546866f9e4b02a1f23027b5faced8a52d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:36 np0005604375 podman[79587]: 2026-02-01 14:50:36.647995514 +0000 UTC m=+0.016915500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:36 np0005604375 podman[79587]: 2026-02-01 14:50:36.749216978 +0000 UTC m=+0.118136944 container init dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:36 np0005604375 podman[79587]: 2026-02-01 14:50:36.755981069 +0000 UTC m=+0.124901045 container start dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:36 np0005604375 podman[79587]: 2026-02-01 14:50:36.763348218 +0000 UTC m=+0.132268174 container attach dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  1 09:50:37 np0005604375 laughing_panini[79651]: 
Feb  1 09:50:37 np0005604375 laughing_panini[79651]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  1 09:50:37 np0005604375 systemd[1]: libpod-dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9.scope: Deactivated successfully.
Feb  1 09:50:37 np0005604375 podman[79587]: 2026-02-01 14:50:37.140878941 +0000 UTC m=+0.509798907 container died dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 09:50:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b73ce7a8b95741a11d17e73023e28b546866f9e4b02a1f23027b5faced8a52d4-merged.mount: Deactivated successfully.
Feb  1 09:50:37 np0005604375 podman[79587]: 2026-02-01 14:50:37.178084124 +0000 UTC m=+0.547004090 container remove dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:37 np0005604375 ansible-async_wrapper.py[79510]: Module complete (79510)
Feb  1 09:50:37 np0005604375 systemd[1]: libpod-conmon-dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9.scope: Deactivated successfully.
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:37 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev d78f5c1b-7fbd-477c-92ac-1b0c26828934 (Updating crash deployment (+1 -> 1))
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:37 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb  1 09:50:37 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  1 09:50:37 np0005604375 ceph-mgr[75469]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb  1 09:50:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:37 np0005604375 ceph-mon[75179]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  1 09:50:37 np0005604375 podman[80004]: 2026-02-01 14:50:37.825884495 +0000 UTC m=+0.051247982 container create 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:37 np0005604375 systemd[1]: Started libpod-conmon-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope.
Feb  1 09:50:37 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:37 np0005604375 podman[80004]: 2026-02-01 14:50:37.797041978 +0000 UTC m=+0.022405495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:37 np0005604375 python3[80002]: ansible-ansible.legacy.async_status Invoked with jid=j351000907281.79434 mode=status _async_dir=/root/.ansible_async
Feb  1 09:50:37 np0005604375 podman[80004]: 2026-02-01 14:50:37.899607981 +0000 UTC m=+0.124971468 container init 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb  1 09:50:37 np0005604375 podman[80004]: 2026-02-01 14:50:37.912195967 +0000 UTC m=+0.137559464 container start 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:50:37 np0005604375 eloquent_chebyshev[80021]: 167 167
Feb  1 09:50:37 np0005604375 systemd[1]: libpod-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope: Deactivated successfully.
Feb  1 09:50:37 np0005604375 conmon[80021]: conmon 40531ed0ebb16ef0833e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope/container/memory.events
Feb  1 09:50:37 np0005604375 podman[80004]: 2026-02-01 14:50:37.916237571 +0000 UTC m=+0.141601078 container attach 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 09:50:37 np0005604375 podman[80004]: 2026-02-01 14:50:37.916606412 +0000 UTC m=+0.141969879 container died 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-6244cb21e32c9b894c2a7cd9bac800c27e9142af7b9e62c025bafae637e91699-merged.mount: Deactivated successfully.
Feb  1 09:50:37 np0005604375 podman[80004]: 2026-02-01 14:50:37.963033264 +0000 UTC m=+0.188396731 container remove 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 09:50:37 np0005604375 systemd[1]: libpod-conmon-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope: Deactivated successfully.
Feb  1 09:50:38 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:38 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:38 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:38 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: Deploying daemon crash.compute-0 on compute-0
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  1 09:50:38 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:38 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:38 np0005604375 python3[80123]: ansible-ansible.legacy.async_status Invoked with jid=j351000907281.79434 mode=cleanup _async_dir=/root/.ansible_async
Feb  1 09:50:38 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:38 np0005604375 systemd[1]: Starting Ceph crash.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:50:38 np0005604375 podman[80212]: 2026-02-01 14:50:38.667934121 +0000 UTC m=+0.052937039 container create 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:38 np0005604375 podman[80212]: 2026-02-01 14:50:38.733686352 +0000 UTC m=+0.118689320 container init 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:50:38 np0005604375 podman[80212]: 2026-02-01 14:50:38.738989172 +0000 UTC m=+0.123992090 container start 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:38 np0005604375 podman[80212]: 2026-02-01 14:50:38.645114775 +0000 UTC m=+0.030117733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:38 np0005604375 bash[80212]: 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825
Feb  1 09:50:38 np0005604375 systemd[1]: Started Ceph crash.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: INFO:ceph-crash:pinging cluster to exercise our key
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:38 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev d78f5c1b-7fbd-477c-92ac-1b0c26828934 (Updating crash deployment (+1 -> 1))
Feb  1 09:50:38 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event d78f5c1b-7fbd-477c-92ac-1b0c26828934 (Updating crash deployment (+1 -> 1)) in 2 seconds
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:38 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev d9ea7757-a6e5-4932-8936-6b3fa39a3c39 (Updating mgr deployment (+1 -> 2))
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr services"} : dispatch
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:38 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.rdxlja on compute-0
Feb  1 09:50:38 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.rdxlja on compute-0
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.876+0000 7fd39659a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.876+0000 7fd39659a640 -1 AuthRegistry(0x7fd390052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.878+0000 7fd39659a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.878+0000 7fd39659a640 -1 AuthRegistry(0x7fd396598fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.879+0000 7fd38ffff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.880+0000 7fd39659a640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb  1 09:50:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb  1 09:50:38 np0005604375 python3[80257]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  1 09:50:39 np0005604375 podman[80388]: 2026-02-01 14:50:39.392391141 +0000 UTC m=+0.050838420 container create c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:39 np0005604375 python3[80370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:39 np0005604375 systemd[1]: Started libpod-conmon-c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b.scope.
Feb  1 09:50:39 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:39 np0005604375 podman[80402]: 2026-02-01 14:50:39.455006273 +0000 UTC m=+0.035274379 container create bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 09:50:39 np0005604375 podman[80388]: 2026-02-01 14:50:39.369143753 +0000 UTC m=+0.027591122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:39 np0005604375 podman[80388]: 2026-02-01 14:50:39.468716821 +0000 UTC m=+0.127164130 container init c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:39 np0005604375 podman[80388]: 2026-02-01 14:50:39.47397177 +0000 UTC m=+0.132419059 container start c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:39 np0005604375 podman[80388]: 2026-02-01 14:50:39.476835021 +0000 UTC m=+0.135282310 container attach c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 09:50:39 np0005604375 elastic_hermann[80416]: 167 167
Feb  1 09:50:39 np0005604375 podman[80388]: 2026-02-01 14:50:39.477990343 +0000 UTC m=+0.136437622 container died c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:39 np0005604375 systemd[1]: Started libpod-conmon-bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1.scope.
Feb  1 09:50:39 np0005604375 systemd[1]: libpod-c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b.scope: Deactivated successfully.
Feb  1 09:50:39 np0005604375 systemd[1]: var-lib-containers-storage-overlay-8e985f998fbf9d88d5079f64574a61b6c0b88c047c2965606611060653933252-merged.mount: Deactivated successfully.
Feb  1 09:50:39 np0005604375 podman[80388]: 2026-02-01 14:50:39.511851511 +0000 UTC m=+0.170298780 container remove c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 09:50:39 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:39 np0005604375 systemd[1]: libpod-conmon-c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b.scope: Deactivated successfully.
Feb  1 09:50:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:39 np0005604375 podman[80402]: 2026-02-01 14:50:39.438446124 +0000 UTC m=+0.018714250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:39 np0005604375 podman[80402]: 2026-02-01 14:50:39.539494794 +0000 UTC m=+0.119762920 container init bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 09:50:39 np0005604375 podman[80402]: 2026-02-01 14:50:39.544643689 +0000 UTC m=+0.124911805 container start bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 09:50:39 np0005604375 podman[80402]: 2026-02-01 14:50:39.547588803 +0000 UTC m=+0.127856919 container attach bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 09:50:39 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:39 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:39 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  1 09:50:39 np0005604375 ceph-mon[75179]: Deploying daemon mgr.compute-0.rdxlja on compute-0
Feb  1 09:50:39 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:39 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:39 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  1 09:50:39 np0005604375 zealous_roentgen[80432]: 
Feb  1 09:50:39 np0005604375 zealous_roentgen[80432]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  1 09:50:39 np0005604375 podman[80402]: 2026-02-01 14:50:39.953874719 +0000 UTC m=+0.534142835 container died bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 09:50:40 np0005604375 systemd[1]: libpod-bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1.scope: Deactivated successfully.
Feb  1 09:50:40 np0005604375 podman[80402]: 2026-02-01 14:50:40.055276949 +0000 UTC m=+0.635545055 container remove bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:40 np0005604375 systemd[1]: Starting Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:50:40 np0005604375 systemd[1]: var-lib-containers-storage-overlay-80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22-merged.mount: Deactivated successfully.
Feb  1 09:50:40 np0005604375 systemd[1]: libpod-conmon-bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1.scope: Deactivated successfully.
Feb  1 09:50:40 np0005604375 podman[80602]: 2026-02-01 14:50:40.321019429 +0000 UTC m=+0.052804906 container create 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 09:50:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/var/lib/ceph/mgr/ceph-compute-0.rdxlja supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:40 np0005604375 podman[80602]: 2026-02-01 14:50:40.295172127 +0000 UTC m=+0.026957714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:40 np0005604375 podman[80602]: 2026-02-01 14:50:40.400561949 +0000 UTC m=+0.132347506 container init 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:40 np0005604375 podman[80602]: 2026-02-01 14:50:40.411368115 +0000 UTC m=+0.143153622 container start 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:50:40 np0005604375 bash[80602]: 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686
Feb  1 09:50:40 np0005604375 systemd[1]: Started Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:50:40 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:40 np0005604375 ceph-mgr[80645]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:50:40 np0005604375 ceph-mgr[80645]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  1 09:50:40 np0005604375 ceph-mgr[80645]: pidfile_write: ignore empty --pid-file
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:40 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev d9ea7757-a6e5-4932-8936-6b3fa39a3c39 (Updating mgr deployment (+1 -> 2))
Feb  1 09:50:40 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event d9ea7757-a6e5-4932-8936-6b3fa39a3c39 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:40 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'alerts'
Feb  1 09:50:40 np0005604375 python3[80647]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:40 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'balancer'
Feb  1 09:50:40 np0005604375 podman[80716]: 2026-02-01 14:50:40.659135266 +0000 UTC m=+0.049058959 container create 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:40 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'cephadm'
Feb  1 09:50:40 np0005604375 systemd[1]: Started libpod-conmon-15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf.scope.
Feb  1 09:50:40 np0005604375 podman[80716]: 2026-02-01 14:50:40.634994013 +0000 UTC m=+0.024917706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:40 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:40 np0005604375 podman[80716]: 2026-02-01 14:50:40.759332392 +0000 UTC m=+0.149256155 container init 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 09:50:40 np0005604375 podman[80716]: 2026-02-01 14:50:40.766632438 +0000 UTC m=+0.156556091 container start 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:40 np0005604375 podman[80716]: 2026-02-01 14:50:40.769749266 +0000 UTC m=+0.159672959 container attach 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:41 np0005604375 podman[80828]: 2026-02-01 14:50:41.110103562 +0000 UTC m=+0.063413469 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286271513' entity='client.admin' 
Feb  1 09:50:41 np0005604375 systemd[1]: libpod-15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf.scope: Deactivated successfully.
Feb  1 09:50:41 np0005604375 podman[80716]: 2026-02-01 14:50:41.166919215 +0000 UTC m=+0.556842928 container died 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:50:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0-merged.mount: Deactivated successfully.
Feb  1 09:50:41 np0005604375 podman[80716]: 2026-02-01 14:50:41.21268511 +0000 UTC m=+0.602608803 container remove 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:41 np0005604375 podman[80828]: 2026-02-01 14:50:41.21302354 +0000 UTC m=+0.166333427 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:50:41 np0005604375 systemd[1]: libpod-conmon-15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf.scope: Deactivated successfully.
Feb  1 09:50:41 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'crash'
Feb  1 09:50:41 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'dashboard'
Feb  1 09:50:41 np0005604375 ansible-async_wrapper.py[79509]: Done in kid B.
Feb  1 09:50:41 np0005604375 python3[80949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:41 np0005604375 podman[80968]: 2026-02-01 14:50:41.571475344 +0000 UTC m=+0.040844650 container create 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 09:50:41 np0005604375 systemd[1]: Started libpod-conmon-6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744.scope.
Feb  1 09:50:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:41 np0005604375 podman[80968]: 2026-02-01 14:50:41.640829088 +0000 UTC m=+0.110198414 container init 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:41 np0005604375 podman[80968]: 2026-02-01 14:50:41.645363203 +0000 UTC m=+0.114732519 container start 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 09:50:41 np0005604375 podman[80968]: 2026-02-01 14:50:41.553981196 +0000 UTC m=+0.023350622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:41 np0005604375 podman[80968]: 2026-02-01 14:50:41.650485684 +0000 UTC m=+0.119855010 container attach 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:41 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb  1 09:50:41 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:41 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb  1 09:50:41 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4238767418' entity='client.admin' 
Feb  1 09:50:42 np0005604375 systemd[1]: libpod-6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744.scope: Deactivated successfully.
Feb  1 09:50:42 np0005604375 podman[80968]: 2026-02-01 14:50:42.03561016 +0000 UTC m=+0.504979486 container died 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  1 09:50:42 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0-merged.mount: Deactivated successfully.
Feb  1 09:50:42 np0005604375 podman[80968]: 2026-02-01 14:50:42.068999468 +0000 UTC m=+0.538368774 container remove 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:42 np0005604375 systemd[1]: libpod-conmon-6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744.scope: Deactivated successfully.
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'devicehealth'
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3286271513' entity='client.admin' 
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: Reconfiguring daemon mon.compute-0 on compute-0
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/4238767418' entity='client.admin' 
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'diskprediction_local'
Feb  1 09:50:42 np0005604375 podman[81126]: 2026-02-01 14:50:42.173271516 +0000 UTC m=+0.046118076 container create 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:42 np0005604375 systemd[1]: Started libpod-conmon-5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e.scope.
Feb  1 09:50:42 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:42 np0005604375 podman[81126]: 2026-02-01 14:50:42.14604314 +0000 UTC m=+0.018889710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:42 np0005604375 podman[81126]: 2026-02-01 14:50:42.250525624 +0000 UTC m=+0.123372244 container init 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:42 np0005604375 podman[81126]: 2026-02-01 14:50:42.257163131 +0000 UTC m=+0.130009691 container start 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:50:42 np0005604375 podman[81126]: 2026-02-01 14:50:42.260575962 +0000 UTC m=+0.133422552 container attach 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:42 np0005604375 boring_elion[81143]: 167 167
Feb  1 09:50:42 np0005604375 systemd[1]: libpod-5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e.scope: Deactivated successfully.
Feb  1 09:50:42 np0005604375 podman[81126]: 2026-02-01 14:50:42.263792777 +0000 UTC m=+0.136639377 container died 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 09:50:42 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c0bcc7ca7b8a9ff4ed43aa01e09310cb8f3e09c166e094d73fe5b268bf59d12e-merged.mount: Deactivated successfully.
Feb  1 09:50:42 np0005604375 podman[81126]: 2026-02-01 14:50:42.308211993 +0000 UTC m=+0.181058553 container remove 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:42 np0005604375 systemd[1]: libpod-conmon-5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e.scope: Deactivated successfully.
Feb  1 09:50:42 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja[80617]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  1 09:50:42 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja[80617]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  1 09:50:42 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja[80617]:  from numpy import show_config as show_numpy_config
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'influx'
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:42 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.viosrg (unknown last config time)...
Feb  1 09:50:42 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.viosrg (unknown last config time)...
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.viosrg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.viosrg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr services"} : dispatch
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:42 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.viosrg on compute-0
Feb  1 09:50:42 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.viosrg on compute-0
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'insights'
Feb  1 09:50:42 np0005604375 python3[81180]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:42 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'iostat'
Feb  1 09:50:42 np0005604375 podman[81228]: 2026-02-01 14:50:42.488920114 +0000 UTC m=+0.037791350 container create 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'k8sevents'
Feb  1 09:50:42 np0005604375 systemd[1]: Started libpod-conmon-3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae.scope.
Feb  1 09:50:42 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:42 np0005604375 podman[81228]: 2026-02-01 14:50:42.566141301 +0000 UTC m=+0.115012567 container init 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:42 np0005604375 podman[81228]: 2026-02-01 14:50:42.473652622 +0000 UTC m=+0.022523868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:42 np0005604375 podman[81228]: 2026-02-01 14:50:42.57387237 +0000 UTC m=+0.122743636 container start 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:42 np0005604375 podman[81228]: 2026-02-01 14:50:42.577477707 +0000 UTC m=+0.126348973 container attach 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:42 np0005604375 podman[81289]: 2026-02-01 14:50:42.774855172 +0000 UTC m=+0.051376902 container create 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:42 np0005604375 systemd[1]: Started libpod-conmon-46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0.scope.
Feb  1 09:50:42 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:42 np0005604375 podman[81289]: 2026-02-01 14:50:42.826355487 +0000 UTC m=+0.102877217 container init 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:42 np0005604375 podman[81289]: 2026-02-01 14:50:42.830467739 +0000 UTC m=+0.106989489 container start 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:42 np0005604375 thirsty_newton[81305]: 167 167
Feb  1 09:50:42 np0005604375 podman[81289]: 2026-02-01 14:50:42.833830649 +0000 UTC m=+0.110352379 container attach 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:42 np0005604375 systemd[1]: libpod-46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0.scope: Deactivated successfully.
Feb  1 09:50:42 np0005604375 podman[81289]: 2026-02-01 14:50:42.835352814 +0000 UTC m=+0.111874544 container died 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'localpool'
Feb  1 09:50:42 np0005604375 podman[81289]: 2026-02-01 14:50:42.751878762 +0000 UTC m=+0.028400582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:42 np0005604375 podman[81289]: 2026-02-01 14:50:42.874416691 +0000 UTC m=+0.150938421 container remove 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:42 np0005604375 systemd[1]: libpod-conmon-46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0.scope: Deactivated successfully.
Feb  1 09:50:42 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'mds_autoscaler'
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb  1 09:50:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb  1 09:50:43 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ee0db40140f0fca9d088cf434a9da7450598494cd3d190285961ccd3c3bf3d0b-merged.mount: Deactivated successfully.
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'mirroring'
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'nfs'
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: Reconfiguring mgr.compute-0.viosrg (unknown last config time)...
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.viosrg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: Reconfiguring daemon mgr.compute-0.viosrg on compute-0
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb  1 09:50:43 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 2 completed events
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'orchestrator'
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:43 np0005604375 podman[81418]: 2026-02-01 14:50:43.544575117 +0000 UTC m=+0.083843614 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:43 np0005604375 podman[81418]: 2026-02-01 14:50:43.618733793 +0000 UTC m=+0.158002240 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'osd_perf_query'
Feb  1 09:50:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'osd_support'
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'pg_autoscaler'
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'progress'
Feb  1 09:50:43 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'prometheus'
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb  1 09:50:43 np0005604375 vigorous_austin[81251]: set require_min_compat_client to mimic
Feb  1 09:50:43 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb  1 09:50:43 np0005604375 systemd[1]: libpod-3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae.scope: Deactivated successfully.
Feb  1 09:50:43 np0005604375 podman[81228]: 2026-02-01 14:50:43.972719907 +0000 UTC m=+1.521591213 container died 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:43 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8-merged.mount: Deactivated successfully.
Feb  1 09:50:44 np0005604375 podman[81228]: 2026-02-01 14:50:44.009113324 +0000 UTC m=+1.557984580 container remove 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:50:44 np0005604375 systemd[1]: libpod-conmon-3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae.scope: Deactivated successfully.
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:44 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'rbd_support'
Feb  1 09:50:44 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'rgw'
Feb  1 09:50:44 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:44 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'rook'
Feb  1 09:50:44 np0005604375 python3[81591]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:44 np0005604375 podman[81592]: 2026-02-01 14:50:44.660239777 +0000 UTC m=+0.056672309 container create 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:50:44 np0005604375 systemd[1]: Started libpod-conmon-4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568.scope.
Feb  1 09:50:44 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:44 np0005604375 podman[81592]: 2026-02-01 14:50:44.636212686 +0000 UTC m=+0.032645298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:44 np0005604375 podman[81592]: 2026-02-01 14:50:44.753595832 +0000 UTC m=+0.150028364 container init 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 09:50:44 np0005604375 podman[81592]: 2026-02-01 14:50:44.758035713 +0000 UTC m=+0.154468255 container start 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:44 np0005604375 podman[81592]: 2026-02-01 14:50:44.76130373 +0000 UTC m=+0.157736312 container attach 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'selftest'
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'smb'
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'snap_schedule'
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'stats'
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'status'
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Added host compute-0
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service mon spec with placement compute-0
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'telegraf'
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 7e382260-f02f-41d2-9f2f-3ca8953bdb76 (Updating mgr deployment (-1 -> 1))
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.rdxlja from compute-0 -- ports [8765]
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.rdxlja from compute-0 -- ports [8765]
Feb  1 09:50:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:45 np0005604375 gracious_stonebraker[81608]: Added host 'compute-0' with addr '192.168.122.100'
Feb  1 09:50:45 np0005604375 gracious_stonebraker[81608]: Scheduled mon update...
Feb  1 09:50:45 np0005604375 gracious_stonebraker[81608]: Scheduled mgr update...
Feb  1 09:50:45 np0005604375 gracious_stonebraker[81608]: Scheduled osd.default_drive_group update...
Feb  1 09:50:45 np0005604375 systemd[1]: libpod-4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568.scope: Deactivated successfully.
Feb  1 09:50:45 np0005604375 podman[81592]: 2026-02-01 14:50:45.673010029 +0000 UTC m=+1.069442571 container died 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'telemetry'
Feb  1 09:50:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:45 np0005604375 systemd[1]: var-lib-containers-storage-overlay-5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39-merged.mount: Deactivated successfully.
Feb  1 09:50:45 np0005604375 podman[81592]: 2026-02-01 14:50:45.708619974 +0000 UTC m=+1.105052506 container remove 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:45 np0005604375 systemd[1]: libpod-conmon-4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568.scope: Deactivated successfully.
Feb  1 09:50:45 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'test_orchestrator'
Feb  1 09:50:45 np0005604375 systemd[1]: Stopping Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:50:46 np0005604375 ceph-mgr[80645]: mgr[py] Loading python module 'volumes'
Feb  1 09:50:46 np0005604375 python3[81805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:50:46 np0005604375 podman[81840]: 2026-02-01 14:50:46.13732987 +0000 UTC m=+0.047169258 container create 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:46 np0005604375 podman[81833]: 2026-02-01 14:50:46.138629298 +0000 UTC m=+0.061370738 container died 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 09:50:46 np0005604375 systemd[1]: var-lib-containers-storage-overlay-193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374-merged.mount: Deactivated successfully.
Feb  1 09:50:46 np0005604375 podman[81833]: 2026-02-01 14:50:46.18664508 +0000 UTC m=+0.109386520 container remove 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 09:50:46 np0005604375 bash[81833]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja
Feb  1 09:50:46 np0005604375 systemd[1]: Started libpod-conmon-8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b.scope.
Feb  1 09:50:46 np0005604375 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.rdxlja.service: Main process exited, code=exited, status=143/n/a
Feb  1 09:50:46 np0005604375 podman[81840]: 2026-02-01 14:50:46.115688759 +0000 UTC m=+0.025528177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:50:46 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:46 np0005604375 podman[81840]: 2026-02-01 14:50:46.247847903 +0000 UTC m=+0.157687301 container init 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:46 np0005604375 podman[81840]: 2026-02-01 14:50:46.25753379 +0000 UTC m=+0.167373208 container start 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:50:46 np0005604375 podman[81840]: 2026-02-01 14:50:46.261281791 +0000 UTC m=+0.171121209 container attach 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:46 np0005604375 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.rdxlja.service: Failed with result 'exit-code'.
Feb  1 09:50:46 np0005604375 systemd[1]: Stopped Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:50:46 np0005604375 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.rdxlja.service: Consumed 6.542s CPU time, 440.9M memory peak, read 0B from disk, written 165.0K to disk.
Feb  1 09:50:46 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:46 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:46 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:46 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: Added host compute-0
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: Saving service mon spec with placement compute-0
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: Saving service mgr spec with placement compute-0
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: Marking host: compute-0 for OSDSpec preview refresh.
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: Saving service osd.default_drive_group spec with placement compute-0
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: Removing daemon mgr.compute-0.rdxlja from compute-0 -- ports [8765]
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217795026' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  1 09:50:46 np0005604375 practical_wright[81875]: 
Feb  1 09:50:46 np0005604375 practical_wright[81875]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":46,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-01T14:49:58:117399+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-01T14:49:58.120892+0000","services":{}},"progress_events":{"7e382260-f02f-41d2-9f2f-3ca8953bdb76":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Feb  1 09:50:46 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.rdxlja
Feb  1 09:50:46 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.rdxlja
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"} v 0)
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"} : dispatch
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"}]': finished
Feb  1 09:50:46 np0005604375 systemd[1]: libpod-8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b.scope: Deactivated successfully.
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  1 09:50:46 np0005604375 podman[81840]: 2026-02-01 14:50:46.797452449 +0000 UTC m=+0.707291867 container died 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 7e382260-f02f-41d2-9f2f-3ca8953bdb76 (Updating mgr deployment (-1 -> 1))
Feb  1 09:50:46 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 7e382260-f02f-41d2-9f2f-3ca8953bdb76 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  1 09:50:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:46 np0005604375 systemd[1]: var-lib-containers-storage-overlay-39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b-merged.mount: Deactivated successfully.
Feb  1 09:50:46 np0005604375 podman[81840]: 2026-02-01 14:50:46.843556205 +0000 UTC m=+0.753395603 container remove 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:46 np0005604375 systemd[1]: libpod-conmon-8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b.scope: Deactivated successfully.
Feb  1 09:50:47 np0005604375 podman[82104]: 2026-02-01 14:50:47.35969935 +0000 UTC m=+0.074710824 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:47 np0005604375 podman[82104]: 2026-02-01 14:50:47.449625713 +0000 UTC m=+0.164637107 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: Removing key for mgr.compute-0.rdxlja
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"} : dispatch
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"}]': finished
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:48 np0005604375 podman[82261]: 2026-02-01 14:50:48.29368855 +0000 UTC m=+0.051442694 container create def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  1 09:50:48 np0005604375 systemd[1]: Started libpod-conmon-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope.
Feb  1 09:50:48 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:48 np0005604375 podman[82261]: 2026-02-01 14:50:48.275975145 +0000 UTC m=+0.033729289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:48 np0005604375 podman[82261]: 2026-02-01 14:50:48.370465294 +0000 UTC m=+0.128219398 container init def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:48 np0005604375 podman[82261]: 2026-02-01 14:50:48.379389968 +0000 UTC m=+0.137144082 container start def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:50:48 np0005604375 podman[82261]: 2026-02-01 14:50:48.38349487 +0000 UTC m=+0.141248974 container attach def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  1 09:50:48 np0005604375 happy_gates[82277]: 167 167
Feb  1 09:50:48 np0005604375 systemd[1]: libpod-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope: Deactivated successfully.
Feb  1 09:50:48 np0005604375 conmon[82277]: conmon def19604c27e80895316 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope/container/memory.events
Feb  1 09:50:48 np0005604375 podman[82261]: 2026-02-01 14:50:48.38518747 +0000 UTC m=+0.142941604 container died def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  1 09:50:48 np0005604375 systemd[1]: var-lib-containers-storage-overlay-6705f2502e4c1c7b7d9960725e80e33c24ee1210f6b9a537ca48d9ade2c83495-merged.mount: Deactivated successfully.
Feb  1 09:50:48 np0005604375 podman[82261]: 2026-02-01 14:50:48.424587507 +0000 UTC m=+0.182341621 container remove def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:48 np0005604375 systemd[1]: libpod-conmon-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope: Deactivated successfully.
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 3 completed events
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:50:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:48 np0005604375 podman[82300]: 2026-02-01 14:50:48.591752277 +0000 UTC m=+0.057943637 container create 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:50:48 np0005604375 systemd[1]: Started libpod-conmon-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope.
Feb  1 09:50:48 np0005604375 podman[82300]: 2026-02-01 14:50:48.563187161 +0000 UTC m=+0.029378601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:48 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:48 np0005604375 podman[82300]: 2026-02-01 14:50:48.69049811 +0000 UTC m=+0.156689460 container init 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:48 np0005604375 podman[82300]: 2026-02-01 14:50:48.697446256 +0000 UTC m=+0.163637616 container start 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:48 np0005604375 podman[82300]: 2026-02-01 14:50:48.701540467 +0000 UTC m=+0.167731817 container attach 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:50:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:50:49 np0005604375 magical_stonebraker[82316]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:50:49 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:49 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:49 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e67ca44a-7e61-43f9-bf2b-cf15de50303a
Feb  1 09:50:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"} v 0)
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"} : dispatch
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"}]': finished
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:50:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:50:49 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  1 09:50:50 np0005604375 lvm[82408]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:50:50 np0005604375 lvm[82408]: VG ceph_vg0 finished
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb  1 09:50:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:50 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  1 09:50:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247501042' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: stderr: got monmap epoch 1
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: --> Creating keyring file for osd.0
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb  1 09:50:50 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid e67ca44a-7e61-43f9-bf2b-cf15de50303a --setuser ceph --setgroup ceph
Feb  1 09:50:50 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"} : dispatch
Feb  1 09:50:50 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"}]': finished
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: stderr: 2026-02-01T14:50:50.681+0000 7f8440e778c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: stderr: 2026-02-01T14:50:50.706+0000 7f8440e778c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:51 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fd39fcf7-28de-4953-80ed-edf6e0aa6fd0
Feb  1 09:50:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  1 09:50:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  1 09:50:51 np0005604375 ceph-mon[75179]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  1 09:50:51 np0005604375 ceph-mon[75179]: Cluster is now healthy
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"} v 0)
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"} : dispatch
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"}]': finished
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:50:52 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:50:52 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:50:52 np0005604375 lvm[83352]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:50:52 np0005604375 lvm[83352]: VG ceph_vg1 finished
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb  1 09:50:52 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605107193' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: stderr: got monmap epoch 1
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: --> Creating keyring file for osd.1
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb  1 09:50:52 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid fd39fcf7-28de-4953-80ed-edf6e0aa6fd0 --setuser ceph --setgroup ceph
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"} : dispatch
Feb  1 09:50:52 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"}]': finished
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: stderr: 2026-02-01T14:50:52.903+0000 7f06f561e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: stderr: 2026-02-01T14:50:52.932+0000 7f06f561e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb  1 09:50:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:53 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7fabf513-99fe-4b35-b072-3f0e487337b7
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"} v 0)
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"} : dispatch
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"}]': finished
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:50:54 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:50:54 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:50:54 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:50:54 np0005604375 lvm[84296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:50:54 np0005604375 lvm[84296]: VG ceph_vg2 finished
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Feb  1 09:50:54 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472763208' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: stderr: got monmap epoch 1
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: --> Creating keyring file for osd.2
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Feb  1 09:50:54 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 7fabf513-99fe-4b35-b072-3f0e487337b7 --setuser ceph --setgroup ceph
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"} : dispatch
Feb  1 09:50:54 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"}]': finished
Feb  1 09:50:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:50:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: stderr: 2026-02-01T14:50:54.985+0000 7fb1207cc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: stderr: 2026-02-01T14:50:55.005+0000 7fb1207cc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  1 09:50:55 np0005604375 magical_stonebraker[82316]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Feb  1 09:50:56 np0005604375 systemd[1]: libpod-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope: Deactivated successfully.
Feb  1 09:50:56 np0005604375 systemd[1]: libpod-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope: Consumed 5.764s CPU time.
Feb  1 09:50:56 np0005604375 podman[85212]: 2026-02-01 14:50:56.063953791 +0000 UTC m=+0.031461752 container died 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay-cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9-merged.mount: Deactivated successfully.
Feb  1 09:50:56 np0005604375 podman[85212]: 2026-02-01 14:50:56.10681247 +0000 UTC m=+0.074320431 container remove 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:50:56 np0005604375 systemd[1]: libpod-conmon-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope: Deactivated successfully.
Feb  1 09:50:56 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:56 np0005604375 podman[85291]: 2026-02-01 14:50:56.620134462 +0000 UTC m=+0.055981078 container create 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 09:50:56 np0005604375 systemd[1]: Started libpod-conmon-49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1.scope.
Feb  1 09:50:56 np0005604375 podman[85291]: 2026-02-01 14:50:56.592324019 +0000 UTC m=+0.028170685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:56 np0005604375 podman[85291]: 2026-02-01 14:50:56.722508594 +0000 UTC m=+0.158355200 container init 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 09:50:56 np0005604375 podman[85291]: 2026-02-01 14:50:56.730775519 +0000 UTC m=+0.166622095 container start 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:56 np0005604375 podman[85291]: 2026-02-01 14:50:56.734443358 +0000 UTC m=+0.170289984 container attach 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 09:50:56 np0005604375 nostalgic_jepsen[85308]: 167 167
Feb  1 09:50:56 np0005604375 systemd[1]: libpod-49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1.scope: Deactivated successfully.
Feb  1 09:50:56 np0005604375 podman[85291]: 2026-02-01 14:50:56.738053885 +0000 UTC m=+0.173900501 container died 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c428f6188108df1ec401a9f80a8843fe3c1b814b2e04875658ff4c10cf42b49c-merged.mount: Deactivated successfully.
Feb  1 09:50:56 np0005604375 podman[85291]: 2026-02-01 14:50:56.778931075 +0000 UTC m=+0.214777691 container remove 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 09:50:56 np0005604375 systemd[1]: libpod-conmon-49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1.scope: Deactivated successfully.
Feb  1 09:50:56 np0005604375 podman[85332]: 2026-02-01 14:50:56.958015669 +0000 UTC m=+0.062883874 container create 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:50:57 np0005604375 systemd[1]: Started libpod-conmon-8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8.scope.
Feb  1 09:50:57 np0005604375 podman[85332]: 2026-02-01 14:50:56.932010319 +0000 UTC m=+0.036878594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:57 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:57 np0005604375 podman[85332]: 2026-02-01 14:50:57.052124776 +0000 UTC m=+0.156993001 container init 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 09:50:57 np0005604375 podman[85332]: 2026-02-01 14:50:57.060804983 +0000 UTC m=+0.165673178 container start 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb  1 09:50:57 np0005604375 podman[85332]: 2026-02-01 14:50:57.063469502 +0000 UTC m=+0.168337707 container attach 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 09:50:57 np0005604375 cool_merkle[85349]: {
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:    "0": [
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:        {
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "devices": [
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "/dev/loop3"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            ],
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_name": "ceph_lv0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_size": "21470642176",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "name": "ceph_lv0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "tags": {
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cluster_name": "ceph",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.crush_device_class": "",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.encrypted": "0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.objectstore": "bluestore",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osd_id": "0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.type": "block",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.vdo": "0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.with_tpm": "0"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            },
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "type": "block",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "vg_name": "ceph_vg0"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:        }
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:    ],
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:    "1": [
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:        {
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "devices": [
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "/dev/loop4"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            ],
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_name": "ceph_lv1",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_size": "21470642176",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "name": "ceph_lv1",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "tags": {
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cluster_name": "ceph",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.crush_device_class": "",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.encrypted": "0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.objectstore": "bluestore",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osd_id": "1",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.type": "block",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.vdo": "0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.with_tpm": "0"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            },
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "type": "block",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "vg_name": "ceph_vg1"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:        }
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:    ],
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:    "2": [
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:        {
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "devices": [
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "/dev/loop5"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            ],
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_name": "ceph_lv2",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_size": "21470642176",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "name": "ceph_lv2",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "tags": {
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.cluster_name": "ceph",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.crush_device_class": "",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.encrypted": "0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.objectstore": "bluestore",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osd_id": "2",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.type": "block",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.vdo": "0",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:                "ceph.with_tpm": "0"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            },
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "type": "block",
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:            "vg_name": "ceph_vg2"
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:        }
Feb  1 09:50:57 np0005604375 cool_merkle[85349]:    ]
Feb  1 09:50:57 np0005604375 cool_merkle[85349]: }
Feb  1 09:50:57 np0005604375 systemd[1]: libpod-8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8.scope: Deactivated successfully.
Feb  1 09:50:57 np0005604375 podman[85332]: 2026-02-01 14:50:57.386964082 +0000 UTC m=+0.491832327 container died 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 09:50:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4-merged.mount: Deactivated successfully.
Feb  1 09:50:57 np0005604375 podman[85332]: 2026-02-01 14:50:57.443929479 +0000 UTC m=+0.548797654 container remove 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:57 np0005604375 systemd[1]: libpod-conmon-8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8.scope: Deactivated successfully.
Feb  1 09:50:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb  1 09:50:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb  1 09:50:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:50:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:50:57 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Feb  1 09:50:57 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Feb  1 09:50:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:57 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb  1 09:50:58 np0005604375 podman[85461]: 2026-02-01 14:50:58.001100959 +0000 UTC m=+0.063783890 container create 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 09:50:58 np0005604375 systemd[1]: Started libpod-conmon-2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5.scope.
Feb  1 09:50:58 np0005604375 podman[85461]: 2026-02-01 14:50:57.972931675 +0000 UTC m=+0.035614636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:58 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:58 np0005604375 podman[85461]: 2026-02-01 14:50:58.081979695 +0000 UTC m=+0.144662596 container init 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 09:50:58 np0005604375 podman[85461]: 2026-02-01 14:50:58.091128106 +0000 UTC m=+0.153811017 container start 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 09:50:58 np0005604375 clever_wozniak[85477]: 167 167
Feb  1 09:50:58 np0005604375 podman[85461]: 2026-02-01 14:50:58.09465269 +0000 UTC m=+0.157335611 container attach 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:50:58 np0005604375 systemd[1]: libpod-2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5.scope: Deactivated successfully.
Feb  1 09:50:58 np0005604375 podman[85461]: 2026-02-01 14:50:58.095573227 +0000 UTC m=+0.158256128 container died 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:50:58 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f2c4a5bb6a9b4d0dc3cf9a29a524c86c7c8db6cffa6ce0c6909942d79ce76c7f-merged.mount: Deactivated successfully.
Feb  1 09:50:58 np0005604375 podman[85461]: 2026-02-01 14:50:58.13111742 +0000 UTC m=+0.193800321 container remove 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 09:50:58 np0005604375 systemd[1]: libpod-conmon-2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5.scope: Deactivated successfully.
Feb  1 09:50:58 np0005604375 podman[85506]: 2026-02-01 14:50:58.332878735 +0000 UTC m=+0.045371205 container create 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 09:50:58 np0005604375 systemd[1]: Started libpod-conmon-91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e.scope.
Feb  1 09:50:58 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:58 np0005604375 podman[85506]: 2026-02-01 14:50:58.308092251 +0000 UTC m=+0.020584791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:58 np0005604375 podman[85506]: 2026-02-01 14:50:58.411750041 +0000 UTC m=+0.124242511 container init 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 09:50:58 np0005604375 podman[85506]: 2026-02-01 14:50:58.423320233 +0000 UTC m=+0.135812663 container start 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:50:58 np0005604375 podman[85506]: 2026-02-01 14:50:58.427004073 +0000 UTC m=+0.139496573 container attach 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:58 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:50:58 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test[85522]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  1 09:50:58 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test[85522]:                            [--no-systemd] [--no-tmpfs]
Feb  1 09:50:58 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test[85522]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  1 09:50:58 np0005604375 systemd[1]: libpod-91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e.scope: Deactivated successfully.
Feb  1 09:50:58 np0005604375 podman[85506]: 2026-02-01 14:50:58.650738428 +0000 UTC m=+0.363230868 container died 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:50:58 np0005604375 systemd[1]: var-lib-containers-storage-overlay-86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0-merged.mount: Deactivated successfully.
Feb  1 09:50:58 np0005604375 podman[85506]: 2026-02-01 14:50:58.693322199 +0000 UTC m=+0.405814639 container remove 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  1 09:50:58 np0005604375 systemd[1]: libpod-conmon-91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e.scope: Deactivated successfully.
Feb  1 09:50:58 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:58 np0005604375 ceph-mon[75179]: Deploying daemon osd.0 on compute-0
Feb  1 09:50:58 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:58 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:59 np0005604375 systemd[1]: Reloading.
Feb  1 09:50:59 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:50:59 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:50:59 np0005604375 systemd[1]: Starting Ceph osd.0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:50:59 np0005604375 podman[85679]: 2026-02-01 14:50:59.512091246 +0000 UTC m=+0.033467382 container create 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:50:59 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:50:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:50:59 np0005604375 podman[85679]: 2026-02-01 14:50:59.580934785 +0000 UTC m=+0.102310961 container init 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb  1 09:50:59 np0005604375 podman[85679]: 2026-02-01 14:50:59.589019834 +0000 UTC m=+0.110395980 container start 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:50:59 np0005604375 podman[85679]: 2026-02-01 14:50:59.49633379 +0000 UTC m=+0.017709956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:50:59 np0005604375 podman[85679]: 2026-02-01 14:50:59.592683173 +0000 UTC m=+0.114059359 container attach 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:50:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:50:59 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:59 np0005604375 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:59 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:50:59 np0005604375 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:00 np0005604375 lvm[85775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:00 np0005604375 lvm[85775]: VG ceph_vg0 finished
Feb  1 09:51:00 np0005604375 lvm[85778]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:00 np0005604375 lvm[85778]: VG ceph_vg1 finished
Feb  1 09:51:00 np0005604375 lvm[85780]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:00 np0005604375 lvm[85780]: VG ceph_vg2 finished
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:00 np0005604375 bash[85679]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:00 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  1 09:51:00 np0005604375 bash[85679]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  1 09:51:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  1 09:51:00 np0005604375 bash[85679]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  1 09:51:00 np0005604375 systemd[1]: libpod-8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b.scope: Deactivated successfully.
Feb  1 09:51:00 np0005604375 podman[85679]: 2026-02-01 14:51:00.642626877 +0000 UTC m=+1.164003013 container died 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb  1 09:51:00 np0005604375 systemd[1]: libpod-8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b.scope: Consumed 1.241s CPU time.
Feb  1 09:51:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2-merged.mount: Deactivated successfully.
Feb  1 09:51:00 np0005604375 podman[85679]: 2026-02-01 14:51:00.67986354 +0000 UTC m=+1.201239706 container remove 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 09:51:00 np0005604375 podman[85950]: 2026-02-01 14:51:00.872025961 +0000 UTC m=+0.039597334 container create 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:00 np0005604375 podman[85950]: 2026-02-01 14:51:00.91420524 +0000 UTC m=+0.081776593 container init 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:51:00 np0005604375 podman[85950]: 2026-02-01 14:51:00.919884408 +0000 UTC m=+0.087455751 container start 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:00 np0005604375 bash[85950]: 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a
Feb  1 09:51:00 np0005604375 podman[85950]: 2026-02-01 14:51:00.855064628 +0000 UTC m=+0.022635991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:00 np0005604375 systemd[1]: Started Ceph osd.0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: pidfile_write: ignore empty --pid-file
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:00 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:00 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb  1 09:51:00 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: load: jerasure load: lrc 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount shared_bdev_used = 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Git sha 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: DB SUMMARY
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: DB Session ID:  WQ9Z5ULV32HB55I5VYO8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                     Options.env: 0x563b6121fea0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                Options.info_log: 0x563b622708a0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                 Options.wal_dir: db.wal
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.write_buffer_manager: 0x563b61284b40
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.row_cache: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                              Options.wal_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.wal_compression: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_background_jobs: 4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Compression algorithms supported:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kZSTD supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kXpressCompression supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kBZip2Compression supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kLZ4Compression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kZlibCompression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: 	kSnappyCompression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b612238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b61223a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b61223a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b61223a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1ec2841c-47a7-4a7e-b481-3d3f5da60a1c
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461313862, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461315145, "job": 1, "event": "recovery_finished"}
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: freelist init
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: freelist _read_cfg
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs umount
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) close
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluefs mount shared_bdev_used = 27262976
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Git sha 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: DB SUMMARY
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: DB Session ID:  WQ9Z5ULV32HB55I5VYO9
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                     Options.env: 0x563b6121fce0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                Options.info_log: 0x563b62270960
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                                 Options.wal_dir: db.wal
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.write_buffer_manager: 0x563b61285900
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.row_cache: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                              Options.wal_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.wal_compression: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_background_jobs: 4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Compression algorithms supported:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kZSTD supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kXpressCompression supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kBZip2Compression supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kLZ4Compression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kZlibCompression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: #011kSnappyCompression supported: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b612238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b612238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b612238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b612238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b612238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b61223a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b61223a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b61223a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1ec2841c-47a7-4a7e-b481-3d3f5da60a1c
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461347692, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461351939, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957461, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2841c-47a7-4a7e-b481-3d3f5da60a1c", "db_session_id": "WQ9Z5ULV32HB55I5VYO9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461354875, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957461, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2841c-47a7-4a7e-b481-3d3f5da60a1c", "db_session_id": "WQ9Z5ULV32HB55I5VYO9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461357312, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957461, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2841c-47a7-4a7e-b481-3d3f5da60a1c", "db_session_id": "WQ9Z5ULV32HB55I5VYO9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461358612, "job": 1, "event": "recovery_finished"}
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563b6248a000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: DB pointer 0x563b6242a000
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 
GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: _get_class not permitted to load lua
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: _get_class not permitted to load sdk
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0 0 load_pgs
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0 0 load_pgs opened 0 pgs
Feb  1 09:51:01 np0005604375 ceph-osd[85969]: osd.0 0 log_to_monitors true
Feb  1 09:51:01 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0[85965]: 2026-02-01T14:51:01.383+0000 7f96e46208c0 -1 osd.0 0 log_to_monitors true
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb  1 09:51:01 np0005604375 podman[86507]: 2026-02-01 14:51:01.451726159 +0000 UTC m=+0.033831293 container create 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  1 09:51:01 np0005604375 systemd[1]: Started libpod-conmon-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope.
Feb  1 09:51:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:01 np0005604375 podman[86507]: 2026-02-01 14:51:01.507241163 +0000 UTC m=+0.089346297 container init 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:51:01 np0005604375 podman[86507]: 2026-02-01 14:51:01.511767487 +0000 UTC m=+0.093872621 container start 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 09:51:01 np0005604375 podman[86507]: 2026-02-01 14:51:01.515029214 +0000 UTC m=+0.097134378 container attach 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:01 np0005604375 gifted_elion[86524]: 167 167
Feb  1 09:51:01 np0005604375 systemd[1]: libpod-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope: Deactivated successfully.
Feb  1 09:51:01 np0005604375 conmon[86524]: conmon 9cff356744490323da45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope/container/memory.events
Feb  1 09:51:01 np0005604375 podman[86507]: 2026-02-01 14:51:01.435106117 +0000 UTC m=+0.017211271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:01 np0005604375 podman[86529]: 2026-02-01 14:51:01.546060693 +0000 UTC m=+0.019864659 container died 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:51:01 np0005604375 systemd[1]: var-lib-containers-storage-overlay-11ea1f2011328170ce06779371618b61b185203fecc1f11724b26a1c3f510067-merged.mount: Deactivated successfully.
Feb  1 09:51:01 np0005604375 podman[86529]: 2026-02-01 14:51:01.573515836 +0000 UTC m=+0.047319802 container remove 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 09:51:01 np0005604375 systemd[1]: libpod-conmon-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope: Deactivated successfully.
Feb  1 09:51:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:51:01 np0005604375 podman[86554]: 2026-02-01 14:51:01.719934622 +0000 UTC m=+0.028880816 container create 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:01 np0005604375 systemd[1]: Started libpod-conmon-973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08.scope.
Feb  1 09:51:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:01 np0005604375 podman[86554]: 2026-02-01 14:51:01.773086266 +0000 UTC m=+0.082032470 container init 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 09:51:01 np0005604375 podman[86554]: 2026-02-01 14:51:01.781835636 +0000 UTC m=+0.090781850 container start 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:01 np0005604375 podman[86554]: 2026-02-01 14:51:01.785303098 +0000 UTC m=+0.094249292 container attach 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:51:01 np0005604375 podman[86554]: 2026-02-01 14:51:01.707732851 +0000 UTC m=+0.016679065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:01 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test[86570]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  1 09:51:01 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test[86570]:                            [--no-systemd] [--no-tmpfs]
Feb  1 09:51:01 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test[86570]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  1 09:51:01 np0005604375 systemd[1]: libpod-973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08.scope: Deactivated successfully.
Feb  1 09:51:01 np0005604375 podman[86554]: 2026-02-01 14:51:01.934592799 +0000 UTC m=+0.243539033 container died 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: Deploying daemon osd.1 on compute-0
Feb  1 09:51:01 np0005604375 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb  1 09:51:02 np0005604375 systemd[1]: var-lib-containers-storage-overlay-224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23-merged.mount: Deactivated successfully.
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:02 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:02 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:02 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:02 np0005604375 podman[86554]: 2026-02-01 14:51:02.017455753 +0000 UTC m=+0.326401957 container remove 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:02 np0005604375 systemd[1]: libpod-conmon-973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08.scope: Deactivated successfully.
Feb  1 09:51:02 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:02 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:02 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:02 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  1 09:51:02 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  1 09:51:02 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:51:02 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:02 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:02 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:02 np0005604375 systemd[1]: Starting Ceph osd.1 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:51:02 np0005604375 podman[86733]: 2026-02-01 14:51:02.961543422 +0000 UTC m=+0.047922160 container create c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Feb  1 09:51:03 np0005604375 ceph-osd[85969]: osd.0 0 done with init, starting boot process
Feb  1 09:51:03 np0005604375 ceph-osd[85969]: osd.0 0 start_boot
Feb  1 09:51:03 np0005604375 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  1 09:51:03 np0005604375 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  1 09:51:03 np0005604375 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  1 09:51:03 np0005604375 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  1 09:51:03 np0005604375 ceph-osd[85969]: osd.0 0  bench count 12288000 bsize 4 KiB
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:03 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:51:03 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:03 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:03 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:03 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1631172060; not ready for session (expect reconnect)
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:51:03 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:51:03 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:03 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:03 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:03 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:03 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  1 09:51:03 np0005604375 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  1 09:51:03 np0005604375 podman[86733]: 2026-02-01 14:51:02.941794307 +0000 UTC m=+0.028173035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:03 np0005604375 podman[86733]: 2026-02-01 14:51:03.041427367 +0000 UTC m=+0.127806095 container init c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:03 np0005604375 podman[86733]: 2026-02-01 14:51:03.053824914 +0000 UTC m=+0.140203612 container start c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:03 np0005604375 podman[86733]: 2026-02-01 14:51:03.070872839 +0000 UTC m=+0.157251567 container attach c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 lvm[86831]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:03 np0005604375 lvm[86831]: VG ceph_vg0 finished
Feb  1 09:51:03 np0005604375 lvm[86834]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:03 np0005604375 lvm[86834]: VG ceph_vg1 finished
Feb  1 09:51:03 np0005604375 lvm[86836]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:03 np0005604375 lvm[86836]: VG ceph_vg2 finished
Feb  1 09:51:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 bash[86733]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  1 09:51:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  1 09:51:03 np0005604375 bash[86733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  1 09:51:04 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  1 09:51:04 np0005604375 bash[86733]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  1 09:51:04 np0005604375 systemd[1]: libpod-c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1.scope: Deactivated successfully.
Feb  1 09:51:04 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1631172060; not ready for session (expect reconnect)
Feb  1 09:51:04 np0005604375 systemd[1]: libpod-c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1.scope: Consumed 1.174s CPU time.
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:51:04 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  1 09:51:04 np0005604375 podman[86932]: 2026-02-01 14:51:04.055044894 +0000 UTC m=+0.020457736 container died c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:04 np0005604375 systemd[1]: var-lib-containers-storage-overlay-565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4-merged.mount: Deactivated successfully.
Feb  1 09:51:04 np0005604375 podman[86932]: 2026-02-01 14:51:04.149475901 +0000 UTC m=+0.114888723 container remove c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:04 np0005604375 podman[86991]: 2026-02-01 14:51:04.303254455 +0000 UTC m=+0.040849141 container create 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:51:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:04 np0005604375 podman[86991]: 2026-02-01 14:51:04.373909458 +0000 UTC m=+0.111504164 container init 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:51:04 np0005604375 podman[86991]: 2026-02-01 14:51:04.37906489 +0000 UTC m=+0.116659596 container start 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 09:51:04 np0005604375 podman[86991]: 2026-02-01 14:51:04.283923263 +0000 UTC m=+0.021517959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:04 np0005604375 bash[86991]: 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720
Feb  1 09:51:04 np0005604375 systemd[1]: Started Ceph osd.1 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: pidfile_write: ignore empty --pid-file
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:04 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Feb  1 09:51:04 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: load: jerasure load: lrc 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount shared_bdev_used = 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Git sha 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: DB SUMMARY
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: DB Session ID:  WI5QOFCFHXU9QXVNGRAO
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                     Options.env: 0x55a03a947ea0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                Options.info_log: 0x55a03b99a8a0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                 Options.wal_dir: db.wal
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.write_buffer_manager: 0x55a03a9acb40
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.row_cache: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                              Options.wal_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.wal_compression: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_background_jobs: 4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Compression algorithms supported:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kZSTD supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kXpressCompression supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kBZip2Compression supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kLZ4Compression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kZlibCompression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: 	kSnappyCompression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a03a94b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a03a94b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a03a94b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a03a94b8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94ba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94ba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94ba30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464695574, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464696768, "job": 1, "event": "recovery_finished"}
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: freelist init
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: freelist _read_cfg
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs umount
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) close
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluefs mount shared_bdev_used = 27262976
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Git sha 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: DB SUMMARY
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: DB Session ID:  WI5QOFCFHXU9QXVNGRAP
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                     Options.env: 0x55a03b793dc0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                Options.info_log: 0x55a03b99b340
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                                 Options.wal_dir: db.wal
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.write_buffer_manager: 0x55a03a9ad900
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.row_cache: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                              Options.wal_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.wal_compression: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_background_jobs: 4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Compression algorithms supported:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kZSTD supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kXpressCompression supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kBZip2Compression supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kLZ4Compression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kZlibCompression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: #011kSnappyCompression supported: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a03a94b4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7800)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a03a94b4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464747951, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464765329, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957464, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d", "db_session_id": "WI5QOFCFHXU9QXVNGRAP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464785532, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957464, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d", "db_session_id": "WI5QOFCFHXU9QXVNGRAP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464788132, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957464, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d", "db_session_id": "WI5QOFCFHXU9QXVNGRAP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464789445, "job": 1, "event": "recovery_finished"}
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a03bbb3c00
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: DB pointer 0x55a03bb54000
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usag
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: _get_class not permitted to load lua
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: _get_class not permitted to load sdk
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1 0 load_pgs
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1 0 load_pgs opened 0 pgs
Feb  1 09:51:04 np0005604375 ceph-osd[87011]: osd.1 0 log_to_monitors true
Feb  1 09:51:04 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1[87007]: 2026-02-01T14:51:04.887+0000 7fe8ab9508c0 -1 osd.1 0 log_to_monitors true
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb  1 09:51:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb  1 09:51:05 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1631172060; not ready for session (expect reconnect)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:51:05 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  1 09:51:05 np0005604375 podman[87550]: 2026-02-01 14:51:05.046112405 +0000 UTC m=+0.047228320 container create 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb  1 09:51:05 np0005604375 systemd[1]: Started libpod-conmon-829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae.scope.
Feb  1 09:51:05 np0005604375 podman[87550]: 2026-02-01 14:51:05.017108476 +0000 UTC m=+0.018224411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:05 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:05 np0005604375 podman[87550]: 2026-02-01 14:51:05.136265685 +0000 UTC m=+0.137381630 container init 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 09:51:05 np0005604375 podman[87550]: 2026-02-01 14:51:05.145053135 +0000 UTC m=+0.146169050 container start 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:05 np0005604375 strange_darwin[87566]: 167 167
Feb  1 09:51:05 np0005604375 systemd[1]: libpod-829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae.scope: Deactivated successfully.
Feb  1 09:51:05 np0005604375 podman[87550]: 2026-02-01 14:51:05.152077693 +0000 UTC m=+0.153193608 container attach 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:05 np0005604375 podman[87550]: 2026-02-01 14:51:05.152792184 +0000 UTC m=+0.153908099 container died 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:05 np0005604375 systemd[1]: var-lib-containers-storage-overlay-7b005c584dc881fb6c10e5cc491b24acdc121745cfea47196a56a07c21d9d1ed-merged.mount: Deactivated successfully.
Feb  1 09:51:05 np0005604375 podman[87550]: 2026-02-01 14:51:05.204196846 +0000 UTC m=+0.205312761 container remove 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:05 np0005604375 systemd[1]: libpod-conmon-829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae.scope: Deactivated successfully.
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 48.513 iops: 12419.418 elapsed_sec: 0.242
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: log_channel(cluster) log [WRN] : OSD bench result of 12419.417952 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 0 waiting for initial osdmap
Feb  1 09:51:05 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0[85965]: 2026-02-01T14:51:05.388+0000 7f96e0db4640 -1 osd.0 0 waiting for initial osdmap
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  1 09:51:05 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0[85965]: 2026-02-01T14:51:05.418+0000 7f96db3a7640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 8 set_numa_affinity not setting numa affinity
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb  1 09:51:05 np0005604375 podman[87595]: 2026-02-01 14:51:05.442538955 +0000 UTC m=+0.032929126 container create 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Feb  1 09:51:05 np0005604375 systemd[1]: Started libpod-conmon-9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c.scope.
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060] boot
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Feb  1 09:51:05 np0005604375 ceph-osd[85969]: osd.0 9 state: booting -> active
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:05 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:05 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:05 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:05 np0005604375 podman[87595]: 2026-02-01 14:51:05.428092367 +0000 UTC m=+0.018482558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:05 np0005604375 podman[87595]: 2026-02-01 14:51:05.53522153 +0000 UTC m=+0.125611771 container init 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:05 np0005604375 podman[87595]: 2026-02-01 14:51:05.542919958 +0000 UTC m=+0.133310159 container start 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:05 np0005604375 podman[87595]: 2026-02-01 14:51:05.54839093 +0000 UTC m=+0.138781271 container attach 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 09:51:05 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test[87612]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  1 09:51:05 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test[87612]:                            [--no-systemd] [--no-tmpfs]
Feb  1 09:51:05 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test[87612]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  1 09:51:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  1 09:51:05 np0005604375 systemd[1]: libpod-9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c.scope: Deactivated successfully.
Feb  1 09:51:05 np0005604375 podman[87595]: 2026-02-01 14:51:05.698737402 +0000 UTC m=+0.289127613 container died 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:05 np0005604375 systemd[1]: var-lib-containers-storage-overlay-88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db-merged.mount: Deactivated successfully.
Feb  1 09:51:05 np0005604375 podman[87595]: 2026-02-01 14:51:05.745827927 +0000 UTC m=+0.336218108 container remove 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  1 09:51:05 np0005604375 systemd[1]: libpod-conmon-9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c.scope: Deactivated successfully.
Feb  1 09:51:05 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  1 09:51:05 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  1 09:51:05 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: Deploying daemon osd.2 on compute-0
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060] boot
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  1 09:51:06 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:06 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:06 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:06 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:06 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:06 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] creating mgr pool
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Feb  1 09:51:06 np0005604375 ceph-osd[87011]: osd.1 0 done with init, starting boot process
Feb  1 09:51:06 np0005604375 ceph-osd[87011]: osd.1 0 start_boot
Feb  1 09:51:06 np0005604375 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  1 09:51:06 np0005604375 systemd[1]: Starting Ceph osd.2 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:51:06 np0005604375 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  1 09:51:06 np0005604375 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  1 09:51:06 np0005604375 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  1 09:51:06 np0005604375 ceph-osd[87011]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Feb  1 09:51:06 np0005604375 ceph-osd[85969]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  1 09:51:06 np0005604375 ceph-osd[85969]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb  1 09:51:06 np0005604375 ceph-osd[85969]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:06 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:06 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb  1 09:51:06 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:06 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:06 np0005604375 podman[87779]: 2026-02-01 14:51:06.711833484 +0000 UTC m=+0.048943491 container create 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:06 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:06 np0005604375 podman[87779]: 2026-02-01 14:51:06.69448726 +0000 UTC m=+0.031597287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:06 np0005604375 podman[87779]: 2026-02-01 14:51:06.80591892 +0000 UTC m=+0.143028997 container init 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  1 09:51:06 np0005604375 podman[87779]: 2026-02-01 14:51:06.817803322 +0000 UTC m=+0.154913359 container start 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:06 np0005604375 podman[87779]: 2026-02-01 14:51:06.824789709 +0000 UTC m=+0.161899756 container attach 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:06 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:06 np0005604375 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:06 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:06 np0005604375 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: OSD bench result of 12419.417952 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb  1 09:51:07 np0005604375 lvm[87878]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:07 np0005604375 lvm[87878]: VG ceph_vg0 finished
Feb  1 09:51:07 np0005604375 lvm[87881]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:07 np0005604375 lvm[87881]: VG ceph_vg1 finished
Feb  1 09:51:07 np0005604375 lvm[87883]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:07 np0005604375 lvm[87883]: VG ceph_vg2 finished
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Feb  1 09:51:07 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:07 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:07 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  1 09:51:07 np0005604375 bash[87779]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  1 09:51:07 np0005604375 bash[87779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  1 09:51:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  1 09:51:07 np0005604375 bash[87779]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  1 09:51:07 np0005604375 systemd[1]: libpod-4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3.scope: Deactivated successfully.
Feb  1 09:51:07 np0005604375 systemd[1]: libpod-4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3.scope: Consumed 1.116s CPU time.
Feb  1 09:51:07 np0005604375 podman[87779]: 2026-02-01 14:51:07.726638537 +0000 UTC m=+1.063748534 container died 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:51:07 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5-merged.mount: Deactivated successfully.
Feb  1 09:51:07 np0005604375 podman[87779]: 2026-02-01 14:51:07.814522629 +0000 UTC m=+1.151632656 container remove 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:51:07 np0005604375 podman[88047]: 2026-02-01 14:51:07.989391088 +0000 UTC m=+0.045272762 container create e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 podman[88047]: 2026-02-01 14:51:08.051478017 +0000 UTC m=+0.107359661 container init e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:08 np0005604375 podman[88047]: 2026-02-01 14:51:08.055091414 +0000 UTC m=+0.110973058 container start e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:08 np0005604375 podman[88047]: 2026-02-01 14:51:07.963046198 +0000 UTC m=+0.018927862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:08 np0005604375 bash[88047]: e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d
Feb  1 09:51:08 np0005604375 systemd[1]: Started Ceph osd.2 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: pidfile_write: ignore empty --pid-file
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: load: jerasure load: lrc 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount shared_bdev_used = 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Git sha 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: DB SUMMARY
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: DB Session ID:  FEEPM6SA8484YKKK65Q9
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                     Options.env: 0x560d7e70bf80
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                Options.info_log: 0x560d7f7668a0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                 Options.wal_dir: db.wal
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.write_buffer_manager: 0x560d7f60ab40
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.row_cache: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                              Options.wal_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.wal_compression: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_background_jobs: 4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Compression algorithms supported:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kZSTD supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kXpressCompression supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kBZip2Compression supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kLZ4Compression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kZlibCompression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: 	kSnappyCompression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70f8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70fa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70fa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560d7e70fa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 741a03b1-6978-4571-936f-6d904f940f62
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468418481, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468419670, "job": 1, "event": "recovery_finished"}
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: freelist init
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: freelist _read_cfg
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs umount
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) close
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluefs mount shared_bdev_used = 27262976
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: RocksDB version: 7.9.2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Git sha 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: DB SUMMARY
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: DB Session ID:  FEEPM6SA8484YKKK65Q8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: CURRENT file:  CURRENT
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: IDENTITY file:  IDENTITY
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.error_if_exists: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.create_if_missing: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.paranoid_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                     Options.env: 0x560d7f936a80
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                Options.info_log: 0x560d7f766a20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_file_opening_threads: 16
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                              Options.statistics: (nil)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.use_fsync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.max_log_file_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.allow_fallocate: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.use_direct_reads: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.create_missing_column_families: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                              Options.db_log_dir: 
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                                 Options.wal_dir: db.wal
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.advise_random_on_open: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.write_buffer_manager: 0x560d7f60b900
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                            Options.rate_limiter: (nil)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.unordered_write: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.row_cache: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                              Options.wal_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.allow_ingest_behind: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.two_write_queues: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.manual_wal_flush: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.wal_compression: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.atomic_flush: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.log_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.allow_data_in_errors: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.db_host_id: __hostname__
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_background_jobs: 4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_background_compactions: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_subcompactions: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.max_open_files: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.bytes_per_sync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.max_background_flushes: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Compression algorithms supported:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kZSTD supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kXpressCompression supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kBZip2Compression supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kLZ4Compression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kZlibCompression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: #011kSnappyCompression supported: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70f8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f7670c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f7670c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f7670c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560d7e70fa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 741a03b1-6978-4571-936f-6d904f940f62
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468459551, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468466748, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957468, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "741a03b1-6978-4571-936f-6d904f940f62", "db_session_id": "FEEPM6SA8484YKKK65Q8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468481257, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957468, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "741a03b1-6978-4571-936f-6d904f940f62", "db_session_id": "FEEPM6SA8484YKKK65Q8", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468484524, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957468, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "741a03b1-6978-4571-936f-6d904f940f62", "db_session_id": "FEEPM6SA8484YKKK65Q8", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468498387, "job": 1, "event": "recovery_finished"}
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  1 09:51:08 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:08 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:08 np0005604375 podman[88543]: 2026-02-01 14:51:08.525420972 +0000 UTC m=+0.036816281 container create 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:08 np0005604375 systemd[1]: Started libpod-conmon-0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a.scope.
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560d7f94a000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: DB pointer 0x560d7f920000
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usag
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: _get_class not permitted to load lua
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: _get_class not permitted to load sdk
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2 0 load_pgs
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2 0 load_pgs opened 0 pgs
Feb  1 09:51:08 np0005604375 ceph-osd[88066]: osd.2 0 log_to_monitors true
Feb  1 09:51:08 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2[88062]: 2026-02-01T14:51:08.581+0000 7fd7be31a8c0 -1 osd.2 0 log_to_monitors true
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb  1 09:51:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb  1 09:51:08 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:08 np0005604375 podman[88543]: 2026-02-01 14:51:08.505978187 +0000 UTC m=+0.017373526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:08 np0005604375 podman[88543]: 2026-02-01 14:51:08.611661966 +0000 UTC m=+0.123057285 container init 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 09:51:08 np0005604375 podman[88543]: 2026-02-01 14:51:08.616410787 +0000 UTC m=+0.127806106 container start 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:51:08 np0005604375 podman[88543]: 2026-02-01 14:51:08.619459337 +0000 UTC m=+0.130854656 container attach 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:08 np0005604375 zen_lamarr[88562]: 167 167
Feb  1 09:51:08 np0005604375 systemd[1]: libpod-0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a.scope: Deactivated successfully.
Feb  1 09:51:08 np0005604375 podman[88543]: 2026-02-01 14:51:08.620949401 +0000 UTC m=+0.132344720 container died 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:08 np0005604375 systemd[1]: var-lib-containers-storage-overlay-49c6ea1f772a83027904f5048c4561792fc5b1bf0663ea00d49c7d9b918dba3f-merged.mount: Deactivated successfully.
Feb  1 09:51:08 np0005604375 podman[88543]: 2026-02-01 14:51:08.677378853 +0000 UTC m=+0.188774172 container remove 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:08 np0005604375 systemd[1]: libpod-conmon-0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a.scope: Deactivated successfully.
Feb  1 09:51:08 np0005604375 podman[88618]: 2026-02-01 14:51:08.774645873 +0000 UTC m=+0.034938995 container create 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 09:51:08 np0005604375 systemd[1]: Started libpod-conmon-4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01.scope.
Feb  1 09:51:08 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:08 np0005604375 podman[88618]: 2026-02-01 14:51:08.851216631 +0000 UTC m=+0.111509743 container init 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 47.525 iops: 12166.306 elapsed_sec: 0.247
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: log_channel(cluster) log [WRN] : OSD bench result of 12166.306450 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 0 waiting for initial osdmap
Feb  1 09:51:08 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1[87007]: 2026-02-01T14:51:08.851+0000 7fe8a80e4640 -1 osd.1 0 waiting for initial osdmap
Feb  1 09:51:08 np0005604375 podman[88618]: 2026-02-01 14:51:08.758556157 +0000 UTC m=+0.018849299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:08 np0005604375 podman[88618]: 2026-02-01 14:51:08.855948411 +0000 UTC m=+0.116241523 container start 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Feb  1 09:51:08 np0005604375 podman[88618]: 2026-02-01 14:51:08.860516086 +0000 UTC m=+0.120809198 container attach 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  1 09:51:08 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1[87007]: 2026-02-01T14:51:08.874+0000 7fe8a26d7640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 11 set_numa_affinity not setting numa affinity
Feb  1 09:51:08 np0005604375 ceph-osd[87011]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Feb  1 09:51:09 np0005604375 lvm[88709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:09 np0005604375 lvm[88709]: VG ceph_vg0 finished
Feb  1 09:51:09 np0005604375 lvm[88711]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:09 np0005604375 lvm[88711]: VG ceph_vg1 finished
Feb  1 09:51:09 np0005604375 lvm[88712]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:09 np0005604375 lvm[88712]: VG ceph_vg2 finished
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:09 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:09 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  1 09:51:09 np0005604375 frosty_meninsky[88634]: {}
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609] boot
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Feb  1 09:51:09 np0005604375 ceph-osd[87011]: osd.1 12 state: booting -> active
Feb  1 09:51:09 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:09 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: OSD bench result of 12166.306450 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  1 09:51:09 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  1 09:51:09 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  1 09:51:09 np0005604375 systemd[1]: libpod-4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01.scope: Deactivated successfully.
Feb  1 09:51:09 np0005604375 podman[88618]: 2026-02-01 14:51:09.587927538 +0000 UTC m=+0.848220660 container died 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:09 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f-merged.mount: Deactivated successfully.
Feb  1 09:51:09 np0005604375 podman[88618]: 2026-02-01 14:51:09.628255783 +0000 UTC m=+0.888548895 container remove 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:09 np0005604375 systemd[1]: libpod-conmon-4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01.scope: Deactivated successfully.
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v29: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb  1 09:51:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:10 np0005604375 podman[88846]: 2026-02-01 14:51:10.267517663 +0000 UTC m=+0.063913664 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 09:51:10 np0005604375 podman[88846]: 2026-02-01 14:51:10.377620224 +0000 UTC m=+0.174016225 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Feb  1 09:51:10 np0005604375 ceph-osd[88066]: osd.2 0 done with init, starting boot process
Feb  1 09:51:10 np0005604375 ceph-osd[88066]: osd.2 0 start_boot
Feb  1 09:51:10 np0005604375 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  1 09:51:10 np0005604375 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  1 09:51:10 np0005604375 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  1 09:51:10 np0005604375 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  1 09:51:10 np0005604375 ceph-osd[88066]: osd.2 0  bench count 12288000 bsize 4 KiB
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:10 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:10 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:10 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:10 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609] boot
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  1 09:51:10 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] creating main.db for devicehealth
Feb  1 09:51:10 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb  1 09:51:10 np0005604375 ceph-mgr[75469]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:11 np0005604375 podman[89072]: 2026-02-01 14:51:11.382875654 +0000 UTC m=+0.073773666 container create 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 09:51:11 np0005604375 systemd[1]: Started libpod-conmon-8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5.scope.
Feb  1 09:51:11 np0005604375 podman[89072]: 2026-02-01 14:51:11.3520034 +0000 UTC m=+0.042901402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:11 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb  1 09:51:11 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:11 np0005604375 podman[89072]: 2026-02-01 14:51:11.693694759 +0000 UTC m=+0.384592791 container init 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:11 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Feb  1 09:51:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb  1 09:51:11 np0005604375 podman[89072]: 2026-02-01 14:51:11.703739277 +0000 UTC m=+0.394637289 container start 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:51:11 np0005604375 loving_payne[89089]: 167 167
Feb  1 09:51:11 np0005604375 systemd[1]: libpod-8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5.scope: Deactivated successfully.
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Feb  1 09:51:11 np0005604375 podman[89072]: 2026-02-01 14:51:11.718059891 +0000 UTC m=+0.408957983 container attach 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:11 np0005604375 podman[89072]: 2026-02-01 14:51:11.718601327 +0000 UTC m=+0.409499339 container died 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:11 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:11 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a6370ca7a6c29c748457577c23f9874162631477c8c0f6fc672ed44e36a59898-merged.mount: Deactivated successfully.
Feb  1 09:51:11 np0005604375 podman[89072]: 2026-02-01 14:51:11.817698332 +0000 UTC m=+0.508596314 container remove 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:11 np0005604375 systemd[1]: libpod-conmon-8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5.scope: Deactivated successfully.
Feb  1 09:51:11 np0005604375 podman[89114]: 2026-02-01 14:51:11.975625409 +0000 UTC m=+0.056273728 container create 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.viosrg(active, since 54s)
Feb  1 09:51:12 np0005604375 podman[89114]: 2026-02-01 14:51:11.942824037 +0000 UTC m=+0.023472426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:12 np0005604375 systemd[1]: Started libpod-conmon-7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83.scope.
Feb  1 09:51:12 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:12 np0005604375 podman[89114]: 2026-02-01 14:51:12.120135138 +0000 UTC m=+0.200783457 container init 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 09:51:12 np0005604375 podman[89114]: 2026-02-01 14:51:12.129883577 +0000 UTC m=+0.210531896 container start 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:51:12 np0005604375 podman[89114]: 2026-02-01 14:51:12.146988824 +0000 UTC m=+0.227637183 container attach 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:12 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:12 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]: [
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:    {
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "available": false,
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "being_replaced": false,
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "ceph_device_lvm": false,
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "lsm_data": {},
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "lvs": [],
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "path": "/dev/sr0",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "rejected_reasons": [
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "Insufficient space (<5GB)",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "Has a FileSystem"
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        ],
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        "sys_api": {
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "actuators": null,
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "device_nodes": [
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:                "sr0"
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            ],
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "devname": "sr0",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "human_readable_size": "482.00 KB",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "id_bus": "ata",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "model": "QEMU DVD-ROM",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "nr_requests": "2",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "parent": "/dev/sr0",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "partitions": {},
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "path": "/dev/sr0",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "removable": "1",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "rev": "2.5+",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "ro": "0",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "rotational": "1",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "sas_address": "",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "sas_device_handle": "",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "scheduler_mode": "mq-deadline",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "sectors": 0,
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "sectorsize": "2048",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "size": 493568.0,
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "support_discard": "2048",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "type": "disk",
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:            "vendor": "QEMU"
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:        }
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]:    }
Feb  1 09:51:12 np0005604375 determined_chandrasekhar[89130]: ]
Feb  1 09:51:12 np0005604375 systemd[1]: libpod-7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83.scope: Deactivated successfully.
Feb  1 09:51:12 np0005604375 podman[89114]: 2026-02-01 14:51:12.713423238 +0000 UTC m=+0.794071587 container died 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 09:51:12 np0005604375 systemd[1]: var-lib-containers-storage-overlay-fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8-merged.mount: Deactivated successfully.
Feb  1 09:51:12 np0005604375 podman[89114]: 2026-02-01 14:51:12.872670394 +0000 UTC m=+0.953318733 container remove 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  1 09:51:12 np0005604375 systemd[1]: libpod-conmon-7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83.scope: Deactivated successfully.
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb  1 09:51:12 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43686k
Feb  1 09:51:12 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43686k
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb  1 09:51:12 np0005604375 ceph-mgr[75469]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb  1 09:51:12 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:13 np0005604375 podman[89995]: 2026-02-01 14:51:13.384917515 +0000 UTC m=+0.044418687 container create 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:13 np0005604375 systemd[1]: Started libpod-conmon-0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5.scope.
Feb  1 09:51:13 np0005604375 podman[89995]: 2026-02-01 14:51:13.359154692 +0000 UTC m=+0.018655854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:13 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:13 np0005604375 podman[89995]: 2026-02-01 14:51:13.502417924 +0000 UTC m=+0.161919176 container init 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:13 np0005604375 podman[89995]: 2026-02-01 14:51:13.510114182 +0000 UTC m=+0.169615364 container start 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:13 np0005604375 wonderful_wiles[90011]: 167 167
Feb  1 09:51:13 np0005604375 systemd[1]: libpod-0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5.scope: Deactivated successfully.
Feb  1 09:51:13 np0005604375 podman[89995]: 2026-02-01 14:51:13.533278918 +0000 UTC m=+0.192780150 container attach 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 09:51:13 np0005604375 podman[89995]: 2026-02-01 14:51:13.533792163 +0000 UTC m=+0.193293375 container died 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 09:51:13 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:13 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:13 np0005604375 systemd[1]: var-lib-containers-storage-overlay-aed99cf3cf07a4bb6817de6419fac5cf48aee237e69325b624ff8968be215f61-merged.mount: Deactivated successfully.
Feb  1 09:51:13 np0005604375 podman[89995]: 2026-02-01 14:51:13.673586623 +0000 UTC m=+0.333087775 container remove 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Feb  1 09:51:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb  1 09:51:13 np0005604375 systemd[1]: libpod-conmon-0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5.scope: Deactivated successfully.
Feb  1 09:51:13 np0005604375 podman[90036]: 2026-02-01 14:51:13.821850703 +0000 UTC m=+0.053818375 container create 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  1 09:51:13 np0005604375 systemd[1]: Started libpod-conmon-2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f.scope.
Feb  1 09:51:13 np0005604375 podman[90036]: 2026-02-01 14:51:13.792227276 +0000 UTC m=+0.024195038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:13 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:13 np0005604375 podman[90036]: 2026-02-01 14:51:13.911769026 +0000 UTC m=+0.143736748 container init 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb  1 09:51:13 np0005604375 podman[90036]: 2026-02-01 14:51:13.918157825 +0000 UTC m=+0.150125507 container start 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 09:51:13 np0005604375 podman[90036]: 2026-02-01 14:51:13.921993899 +0000 UTC m=+0.153961581 container attach 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: Adjusting osd_memory_target on compute-0 to 43686k
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.731 iops: 9403.069 elapsed_sec: 0.319
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: log_channel(cluster) log [WRN] : OSD bench result of 9403.069102 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 0 waiting for initial osdmap
Feb  1 09:51:14 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2[88062]: 2026-02-01T14:51:14.022+0000 7fd7baaae640 -1 osd.2 0 waiting for initial osdmap
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 14 set_numa_affinity not setting numa affinity
Feb  1 09:51:14 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2[88062]: 2026-02-01T14:51:14.042+0000 7fd7b50a1640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  1 09:51:14 np0005604375 ceph-osd[88066]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Feb  1 09:51:14 np0005604375 boring_banach[90052]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:51:14 np0005604375 boring_banach[90052]: --> All data devices are unavailable
Feb  1 09:51:14 np0005604375 systemd[1]: libpod-2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f.scope: Deactivated successfully.
Feb  1 09:51:14 np0005604375 podman[90036]: 2026-02-01 14:51:14.329279691 +0000 UTC m=+0.561247363 container died 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 09:51:14 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c-merged.mount: Deactivated successfully.
Feb  1 09:51:14 np0005604375 podman[90036]: 2026-02-01 14:51:14.368464631 +0000 UTC m=+0.600432303 container remove 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:14 np0005604375 systemd[1]: libpod-conmon-2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f.scope: Deactivated successfully.
Feb  1 09:51:14 np0005604375 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb  1 09:51:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:14 np0005604375 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  1 09:51:14 np0005604375 podman[90144]: 2026-02-01 14:51:14.744373894 +0000 UTC m=+0.045093577 container create 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:14 np0005604375 systemd[1]: Started libpod-conmon-04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a.scope.
Feb  1 09:51:14 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:14 np0005604375 podman[90144]: 2026-02-01 14:51:14.80941531 +0000 UTC m=+0.110135073 container init 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  1 09:51:14 np0005604375 podman[90144]: 2026-02-01 14:51:14.716090246 +0000 UTC m=+0.016809939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:14 np0005604375 podman[90144]: 2026-02-01 14:51:14.816727446 +0000 UTC m=+0.117447139 container start 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:14 np0005604375 eloquent_blackwell[90160]: 167 167
Feb  1 09:51:14 np0005604375 podman[90144]: 2026-02-01 14:51:14.820838458 +0000 UTC m=+0.121558161 container attach 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 09:51:14 np0005604375 systemd[1]: libpod-04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a.scope: Deactivated successfully.
Feb  1 09:51:14 np0005604375 podman[90144]: 2026-02-01 14:51:14.82294083 +0000 UTC m=+0.123660493 container died 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:14 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c238ac3520da29391a0c26a6fa92b4a980f23eed82ec594634cf18cd01b79a13-merged.mount: Deactivated successfully.
Feb  1 09:51:14 np0005604375 podman[90144]: 2026-02-01 14:51:14.868115368 +0000 UTC m=+0.168835031 container remove 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:14 np0005604375 systemd[1]: libpod-conmon-04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a.scope: Deactivated successfully.
Feb  1 09:51:15 np0005604375 podman[90183]: 2026-02-01 14:51:15.012921716 +0000 UTC m=+0.053748172 container create 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: OSD bench result of 9403.069102 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045] boot
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  1 09:51:15 np0005604375 ceph-osd[88066]: osd.2 15 state: booting -> active
Feb  1 09:51:15 np0005604375 systemd[1]: Started libpod-conmon-360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac.scope.
Feb  1 09:51:15 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:15 np0005604375 podman[90183]: 2026-02-01 14:51:14.988129102 +0000 UTC m=+0.028955628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:15 np0005604375 podman[90183]: 2026-02-01 14:51:15.089807833 +0000 UTC m=+0.130634289 container init 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:15 np0005604375 podman[90183]: 2026-02-01 14:51:15.094490692 +0000 UTC m=+0.135317118 container start 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 09:51:15 np0005604375 podman[90183]: 2026-02-01 14:51:15.097341297 +0000 UTC m=+0.138167793 container attach 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]: {
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:    "0": [
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:        {
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "devices": [
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "/dev/loop3"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            ],
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_name": "ceph_lv0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_size": "21470642176",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "name": "ceph_lv0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "tags": {
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.crush_device_class": "",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.encrypted": "0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osd_id": "0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.type": "block",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.vdo": "0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.with_tpm": "0"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            },
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "type": "block",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "vg_name": "ceph_vg0"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:        }
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:    ],
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:    "1": [
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:        {
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "devices": [
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "/dev/loop4"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            ],
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_name": "ceph_lv1",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_size": "21470642176",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "name": "ceph_lv1",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "tags": {
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.crush_device_class": "",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.encrypted": "0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osd_id": "1",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.type": "block",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.vdo": "0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.with_tpm": "0"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            },
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "type": "block",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "vg_name": "ceph_vg1"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:        }
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:    ],
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:    "2": [
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:        {
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "devices": [
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "/dev/loop5"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            ],
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_name": "ceph_lv2",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_size": "21470642176",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "name": "ceph_lv2",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "tags": {
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.crush_device_class": "",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.encrypted": "0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osd_id": "2",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.type": "block",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.vdo": "0",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:                "ceph.with_tpm": "0"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            },
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "type": "block",
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:            "vg_name": "ceph_vg2"
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:        }
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]:    ]
Feb  1 09:51:15 np0005604375 cranky_albattani[90199]: }
Feb  1 09:51:15 np0005604375 systemd[1]: libpod-360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac.scope: Deactivated successfully.
Feb  1 09:51:15 np0005604375 podman[90183]: 2026-02-01 14:51:15.356980196 +0000 UTC m=+0.397806642 container died 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:51:15 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537-merged.mount: Deactivated successfully.
Feb  1 09:51:15 np0005604375 podman[90183]: 2026-02-01 14:51:15.396849496 +0000 UTC m=+0.437675932 container remove 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:15 np0005604375 systemd[1]: libpod-conmon-360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac.scope: Deactivated successfully.
Feb  1 09:51:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:15 np0005604375 podman[90280]: 2026-02-01 14:51:15.801042326 +0000 UTC m=+0.042188910 container create 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:15 np0005604375 systemd[1]: Started libpod-conmon-53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188.scope.
Feb  1 09:51:15 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:15 np0005604375 podman[90280]: 2026-02-01 14:51:15.866434163 +0000 UTC m=+0.107580787 container init 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:15 np0005604375 podman[90280]: 2026-02-01 14:51:15.870486573 +0000 UTC m=+0.111633157 container start 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:15 np0005604375 podman[90280]: 2026-02-01 14:51:15.874515902 +0000 UTC m=+0.115662536 container attach 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  1 09:51:15 np0005604375 affectionate_varahamihira[90296]: 167 167
Feb  1 09:51:15 np0005604375 podman[90280]: 2026-02-01 14:51:15.875526502 +0000 UTC m=+0.116673086 container died 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:51:15 np0005604375 systemd[1]: libpod-53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188.scope: Deactivated successfully.
Feb  1 09:51:15 np0005604375 podman[90280]: 2026-02-01 14:51:15.785150456 +0000 UTC m=+0.026297070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:15 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e104d42f97ed67a75a8a0a937e8b14849421d086e8e20ae31af9e4ce0cf3d080-merged.mount: Deactivated successfully.
Feb  1 09:51:15 np0005604375 podman[90280]: 2026-02-01 14:51:15.905419918 +0000 UTC m=+0.146566502 container remove 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 09:51:15 np0005604375 systemd[1]: libpod-conmon-53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188.scope: Deactivated successfully.
Feb  1 09:51:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb  1 09:51:16 np0005604375 ceph-mon[75179]: osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045] boot
Feb  1 09:51:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Feb  1 09:51:16 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Feb  1 09:51:16 np0005604375 podman[90321]: 2026-02-01 14:51:16.072549817 +0000 UTC m=+0.083574076 container create 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 09:51:16 np0005604375 systemd[1]: Started libpod-conmon-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope.
Feb  1 09:51:16 np0005604375 podman[90321]: 2026-02-01 14:51:16.045134285 +0000 UTC m=+0.056158614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:16 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:16 np0005604375 podman[90321]: 2026-02-01 14:51:16.169757666 +0000 UTC m=+0.180781975 container init 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 09:51:16 np0005604375 podman[90321]: 2026-02-01 14:51:16.177026531 +0000 UTC m=+0.188050770 container start 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:51:16 np0005604375 podman[90321]: 2026-02-01 14:51:16.180520614 +0000 UTC m=+0.191544933 container attach 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 09:51:16 np0005604375 lvm[90414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:16 np0005604375 lvm[90414]: VG ceph_vg0 finished
Feb  1 09:51:16 np0005604375 lvm[90417]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:16 np0005604375 lvm[90417]: VG ceph_vg1 finished
Feb  1 09:51:16 np0005604375 lvm[90419]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:16 np0005604375 lvm[90419]: VG ceph_vg2 finished
Feb  1 09:51:16 np0005604375 sweet_feistel[90338]: {}
Feb  1 09:51:16 np0005604375 systemd[1]: libpod-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope: Deactivated successfully.
Feb  1 09:51:16 np0005604375 systemd[1]: libpod-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope: Consumed 1.070s CPU time.
Feb  1 09:51:16 np0005604375 podman[90321]: 2026-02-01 14:51:16.970420507 +0000 UTC m=+0.981444796 container died 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 09:51:17 np0005604375 systemd[1]: var-lib-containers-storage-overlay-24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82-merged.mount: Deactivated successfully.
Feb  1 09:51:17 np0005604375 podman[90321]: 2026-02-01 14:51:17.023931852 +0000 UTC m=+1.034956121 container remove 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:17 np0005604375 systemd[1]: libpod-conmon-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope: Deactivated successfully.
Feb  1 09:51:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:17 np0005604375 python3[90448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:17 np0005604375 podman[90484]: 2026-02-01 14:51:17.221265696 +0000 UTC m=+0.048553459 container create 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:17 np0005604375 systemd[1]: Started libpod-conmon-7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8.scope.
Feb  1 09:51:17 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:17 np0005604375 podman[90484]: 2026-02-01 14:51:17.208538299 +0000 UTC m=+0.035826092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:17 np0005604375 podman[90484]: 2026-02-01 14:51:17.328766329 +0000 UTC m=+0.156054202 container init 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:17 np0005604375 podman[90484]: 2026-02-01 14:51:17.338669542 +0000 UTC m=+0.165957345 container start 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:51:17 np0005604375 podman[90484]: 2026-02-01 14:51:17.342597658 +0000 UTC m=+0.169885511 container attach 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:51:17
Feb  1 09:51:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:51:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:51:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr']
Feb  1 09:51:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 09:51:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  1 09:51:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186673883' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  1 09:51:17 np0005604375 flamboyant_chaum[90503]: 
Feb  1 09:51:17 np0005604375 flamboyant_chaum[90503]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":77,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1769957475,"num_in_osds":3,"osd_in_since":1769957454,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502886400,"bytes_avail":63909040128,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2026-02-01T14:49:58:117399+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-01T14:49:58.120892+0000","services":{}},"progress_events":{}}
Feb  1 09:51:17 np0005604375 systemd[1]: libpod-7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8.scope: Deactivated successfully.
Feb  1 09:51:17 np0005604375 podman[90484]: 2026-02-01 14:51:17.818354318 +0000 UTC m=+0.645642091 container died 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:17 np0005604375 systemd[1]: var-lib-containers-storage-overlay-33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7-merged.mount: Deactivated successfully.
Feb  1 09:51:17 np0005604375 podman[90484]: 2026-02-01 14:51:17.857668352 +0000 UTC m=+0.684956125 container remove 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:51:17 np0005604375 systemd[1]: libpod-conmon-7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8.scope: Deactivated successfully.
Feb  1 09:51:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:18 np0005604375 python3[90565]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:18 np0005604375 podman[90566]: 2026-02-01 14:51:18.409288858 +0000 UTC m=+0.045400696 container create 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:51:18 np0005604375 systemd[1]: Started libpod-conmon-4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae.scope.
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:51:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:51:18 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d5497146173086566ccd20a8843ed910f98eb6c3a38fac887723a0d5928e1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d5497146173086566ccd20a8843ed910f98eb6c3a38fac887723a0d5928e1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:18 np0005604375 podman[90566]: 2026-02-01 14:51:18.386447361 +0000 UTC m=+0.022559249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:18 np0005604375 podman[90566]: 2026-02-01 14:51:18.494630805 +0000 UTC m=+0.130742663 container init 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:18 np0005604375 podman[90566]: 2026-02-01 14:51:18.499028476 +0000 UTC m=+0.135140314 container start 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:51:18 np0005604375 podman[90566]: 2026-02-01 14:51:18.503248301 +0000 UTC m=+0.139360179 container attach 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 09:51:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  1 09:51:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb  1 09:51:19 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Feb  1 09:51:19 np0005604375 stoic_pare[90582]: pool 'vms' created
Feb  1 09:51:19 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Feb  1 09:51:19 np0005604375 systemd[1]: libpod-4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae.scope: Deactivated successfully.
Feb  1 09:51:19 np0005604375 podman[90566]: 2026-02-01 14:51:19.158814525 +0000 UTC m=+0.794926393 container died 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:19 np0005604375 systemd[1]: var-lib-containers-storage-overlay-99d5497146173086566ccd20a8843ed910f98eb6c3a38fac887723a0d5928e1b-merged.mount: Deactivated successfully.
Feb  1 09:51:19 np0005604375 podman[90566]: 2026-02-01 14:51:19.203584671 +0000 UTC m=+0.839696509 container remove 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 09:51:19 np0005604375 systemd[1]: libpod-conmon-4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae.scope: Deactivated successfully.
Feb  1 09:51:19 np0005604375 python3[90646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:19 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:19 np0005604375 podman[90647]: 2026-02-01 14:51:19.519130226 +0000 UTC m=+0.045992343 container create a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 09:51:19 np0005604375 systemd[1]: Started libpod-conmon-a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622.scope.
Feb  1 09:51:19 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b207f6adef79aba507ca9e566ce1fc763c820eba44571638c868215dbd559e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b207f6adef79aba507ca9e566ce1fc763c820eba44571638c868215dbd559e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:19 np0005604375 podman[90647]: 2026-02-01 14:51:19.495815325 +0000 UTC m=+0.022677442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:19 np0005604375 podman[90647]: 2026-02-01 14:51:19.598979381 +0000 UTC m=+0.125841508 container init a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:19 np0005604375 podman[90647]: 2026-02-01 14:51:19.60640439 +0000 UTC m=+0.133266477 container start a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:51:19 np0005604375 podman[90647]: 2026-02-01 14:51:19.609975656 +0000 UTC m=+0.136837773 container attach a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Feb  1 09:51:20 np0005604375 friendly_rubin[90662]: pool 'volumes' created
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Feb  1 09:51:20 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:20 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:20 np0005604375 systemd[1]: libpod-a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622.scope: Deactivated successfully.
Feb  1 09:51:20 np0005604375 podman[90647]: 2026-02-01 14:51:20.16422067 +0000 UTC m=+0.691082747 container died a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:51:20 np0005604375 systemd[1]: var-lib-containers-storage-overlay-13b207f6adef79aba507ca9e566ce1fc763c820eba44571638c868215dbd559e-merged.mount: Deactivated successfully.
Feb  1 09:51:20 np0005604375 podman[90647]: 2026-02-01 14:51:20.192186288 +0000 UTC m=+0.719048375 container remove a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:20 np0005604375 systemd[1]: libpod-conmon-a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622.scope: Deactivated successfully.
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:20 np0005604375 python3[90726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:20 np0005604375 podman[90727]: 2026-02-01 14:51:20.49039987 +0000 UTC m=+0.049465846 container create 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  1 09:51:20 np0005604375 systemd[1]: Started libpod-conmon-5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00.scope.
Feb  1 09:51:20 np0005604375 podman[90727]: 2026-02-01 14:51:20.464226285 +0000 UTC m=+0.023292321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:20 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da76ee4adbc4a74f074a682762ce20522c1e2de735cf899d839b77e09996f969/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da76ee4adbc4a74f074a682762ce20522c1e2de735cf899d839b77e09996f969/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:20 np0005604375 podman[90727]: 2026-02-01 14:51:20.578125678 +0000 UTC m=+0.137191654 container init 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 09:51:20 np0005604375 podman[90727]: 2026-02-01 14:51:20.585453545 +0000 UTC m=+0.144519551 container start 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:20 np0005604375 podman[90727]: 2026-02-01 14:51:20.589173215 +0000 UTC m=+0.148239201 container attach 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  1 09:51:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Feb  1 09:51:21 np0005604375 hungry_kowalevski[90741]: pool 'backups' created
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Feb  1 09:51:21 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:21 np0005604375 systemd[1]: libpod-5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00.scope: Deactivated successfully.
Feb  1 09:51:21 np0005604375 podman[90727]: 2026-02-01 14:51:21.183086312 +0000 UTC m=+0.742152318 container died 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 09:51:21 np0005604375 systemd[1]: var-lib-containers-storage-overlay-da76ee4adbc4a74f074a682762ce20522c1e2de735cf899d839b77e09996f969-merged.mount: Deactivated successfully.
Feb  1 09:51:21 np0005604375 podman[90727]: 2026-02-01 14:51:21.220341916 +0000 UTC m=+0.779407882 container remove 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:21 np0005604375 systemd[1]: libpod-conmon-5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00.scope: Deactivated successfully.
Feb  1 09:51:21 np0005604375 python3[90805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:21 np0005604375 podman[90806]: 2026-02-01 14:51:21.504510931 +0000 UTC m=+0.038018587 container create bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:51:21 np0005604375 systemd[1]: Started libpod-conmon-bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b.scope.
Feb  1 09:51:21 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e5be2672977be384a2ba2d12c7724a6e94f65477c63c4537669a02ee89d91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e5be2672977be384a2ba2d12c7724a6e94f65477c63c4537669a02ee89d91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:21 np0005604375 podman[90806]: 2026-02-01 14:51:21.578769661 +0000 UTC m=+0.112277337 container init bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:21 np0005604375 podman[90806]: 2026-02-01 14:51:21.486103266 +0000 UTC m=+0.019610942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:21 np0005604375 podman[90806]: 2026-02-01 14:51:21.582615544 +0000 UTC m=+0.116123200 container start bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 09:51:21 np0005604375 podman[90806]: 2026-02-01 14:51:21.58582555 +0000 UTC m=+0.119333206 container attach bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 09:51:21 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v42: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  1 09:51:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb  1 09:51:22 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:22 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Feb  1 09:51:22 np0005604375 stoic_nobel[90821]: pool 'images' created
Feb  1 09:51:22 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Feb  1 09:51:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:22 np0005604375 systemd[1]: libpod-bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b.scope: Deactivated successfully.
Feb  1 09:51:22 np0005604375 podman[90806]: 2026-02-01 14:51:22.20849426 +0000 UTC m=+0.742001916 container died bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:22 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d37e5be2672977be384a2ba2d12c7724a6e94f65477c63c4537669a02ee89d91-merged.mount: Deactivated successfully.
Feb  1 09:51:22 np0005604375 podman[90806]: 2026-02-01 14:51:22.240200799 +0000 UTC m=+0.773708455 container remove bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  1 09:51:22 np0005604375 systemd[1]: libpod-conmon-bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b.scope: Deactivated successfully.
Feb  1 09:51:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:22 np0005604375 python3[90886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:22 np0005604375 podman[90887]: 2026-02-01 14:51:22.569089109 +0000 UTC m=+0.050942630 container create fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 09:51:22 np0005604375 systemd[1]: Started libpod-conmon-fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018.scope.
Feb  1 09:51:22 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bed56f6de90ad4ac6d375b69e25122501d1a930cb5b2c2a75246f56c870285/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bed56f6de90ad4ac6d375b69e25122501d1a930cb5b2c2a75246f56c870285/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:22 np0005604375 podman[90887]: 2026-02-01 14:51:22.546536111 +0000 UTC m=+0.028389672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:22 np0005604375 podman[90887]: 2026-02-01 14:51:22.646237823 +0000 UTC m=+0.128091404 container init fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:22 np0005604375 podman[90887]: 2026-02-01 14:51:22.650234532 +0000 UTC m=+0.132088013 container start fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:22 np0005604375 podman[90887]: 2026-02-01 14:51:22.654479627 +0000 UTC m=+0.136333128 container attach fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Feb  1 09:51:23 np0005604375 agitated_mclean[90902]: pool 'cephfs.cephfs.meta' created
Feb  1 09:51:23 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:23 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:23 np0005604375 systemd[1]: libpod-fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018.scope: Deactivated successfully.
Feb  1 09:51:23 np0005604375 podman[90887]: 2026-02-01 14:51:23.21141147 +0000 UTC m=+0.693264961 container died fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:23 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e3bed56f6de90ad4ac6d375b69e25122501d1a930cb5b2c2a75246f56c870285-merged.mount: Deactivated successfully.
Feb  1 09:51:23 np0005604375 podman[90887]: 2026-02-01 14:51:23.243882212 +0000 UTC m=+0.725735733 container remove fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 09:51:23 np0005604375 systemd[1]: libpod-conmon-fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018.scope: Deactivated successfully.
Feb  1 09:51:23 np0005604375 python3[90965]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:23 np0005604375 podman[90966]: 2026-02-01 14:51:23.556060527 +0000 UTC m=+0.066456489 container create 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:51:23 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:23 np0005604375 systemd[1]: Started libpod-conmon-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope.
Feb  1 09:51:23 np0005604375 podman[90966]: 2026-02-01 14:51:23.526704988 +0000 UTC m=+0.037101060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:23 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4331c5c601c012f189a3b20c9ebb33cb49821f083f456698df96b91a5de8b3d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4331c5c601c012f189a3b20c9ebb33cb49821f083f456698df96b91a5de8b3d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:23 np0005604375 podman[90966]: 2026-02-01 14:51:23.647395122 +0000 UTC m=+0.157791104 container init 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:51:23 np0005604375 podman[90966]: 2026-02-01 14:51:23.651354379 +0000 UTC m=+0.161750341 container start 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:23 np0005604375 podman[90966]: 2026-02-01 14:51:23.654698778 +0000 UTC m=+0.165094770 container attach 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 09:51:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v45: 6 pgs: 3 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Feb  1 09:51:24 np0005604375 zen_grothendieck[90982]: pool 'cephfs.cephfs.data' created
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Feb  1 09:51:24 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:24 np0005604375 systemd[1]: libpod-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope: Deactivated successfully.
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  1 09:51:24 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  1 09:51:24 np0005604375 conmon[90982]: conmon 270a21087bdf27c7fc12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope/container/memory.events
Feb  1 09:51:24 np0005604375 podman[91009]: 2026-02-01 14:51:24.258103468 +0000 UTC m=+0.022643192 container died 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 09:51:24 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4331c5c601c012f189a3b20c9ebb33cb49821f083f456698df96b91a5de8b3d0-merged.mount: Deactivated successfully.
Feb  1 09:51:24 np0005604375 podman[91009]: 2026-02-01 14:51:24.294928809 +0000 UTC m=+0.059468503 container remove 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:51:24 np0005604375 systemd[1]: libpod-conmon-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope: Deactivated successfully.
Feb  1 09:51:24 np0005604375 python3[91048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:24 np0005604375 podman[91049]: 2026-02-01 14:51:24.680847786 +0000 UTC m=+0.044228070 container create 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 09:51:24 np0005604375 systemd[1]: Started libpod-conmon-97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329.scope.
Feb  1 09:51:24 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:24 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e7ab86aab5440f39a151375c05e086034d166ab067197ac23e91aa08e4b76d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:24 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e7ab86aab5440f39a151375c05e086034d166ab067197ac23e91aa08e4b76d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:24 np0005604375 podman[91049]: 2026-02-01 14:51:24.663490972 +0000 UTC m=+0.026871276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:24 np0005604375 podman[91049]: 2026-02-01 14:51:24.773581593 +0000 UTC m=+0.136961957 container init 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:24 np0005604375 podman[91049]: 2026-02-01 14:51:24.778160158 +0000 UTC m=+0.141540452 container start 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 09:51:24 np0005604375 podman[91049]: 2026-02-01 14:51:24.781359013 +0000 UTC m=+0.144739337 container attach 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Feb  1 09:51:25 np0005604375 hungry_babbage[91064]: enabled application 'rbd' on pool 'vms'
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Feb  1 09:51:25 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:25 np0005604375 systemd[1]: libpod-97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329.scope: Deactivated successfully.
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  1 09:51:25 np0005604375 podman[91090]: 2026-02-01 14:51:25.265289035 +0000 UTC m=+0.031929567 container died 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:25 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b6e7ab86aab5440f39a151375c05e086034d166ab067197ac23e91aa08e4b76d-merged.mount: Deactivated successfully.
Feb  1 09:51:25 np0005604375 podman[91090]: 2026-02-01 14:51:25.299640172 +0000 UTC m=+0.066280674 container remove 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 09:51:25 np0005604375 systemd[1]: libpod-conmon-97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329.scope: Deactivated successfully.
Feb  1 09:51:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:25 np0005604375 python3[91130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:25 np0005604375 podman[91131]: 2026-02-01 14:51:25.618527606 +0000 UTC m=+0.038319916 container create 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 09:51:25 np0005604375 systemd[1]: Started libpod-conmon-7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f.scope.
Feb  1 09:51:25 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:25 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141e26b99d29bb15c50d0a6afc1f8f0661a1efe5fa1e3754e04a80f519039271/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:25 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141e26b99d29bb15c50d0a6afc1f8f0661a1efe5fa1e3754e04a80f519039271/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:25 np0005604375 podman[91131]: 2026-02-01 14:51:25.683819539 +0000 UTC m=+0.103611859 container init 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 09:51:25 np0005604375 podman[91131]: 2026-02-01 14:51:25.688342943 +0000 UTC m=+0.108135273 container start 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 09:51:25 np0005604375 podman[91131]: 2026-02-01 14:51:25.691169487 +0000 UTC m=+0.110961817 container attach 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 09:51:25 np0005604375 podman[91131]: 2026-02-01 14:51:25.599149012 +0000 UTC m=+0.018941392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v48: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Feb  1 09:51:26 np0005604375 goofy_proskuriakova[91146]: enabled application 'rbd' on pool 'volumes'
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Feb  1 09:51:26 np0005604375 systemd[1]: libpod-7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f.scope: Deactivated successfully.
Feb  1 09:51:26 np0005604375 podman[91131]: 2026-02-01 14:51:26.219110022 +0000 UTC m=+0.638902342 container died 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb  1 09:51:26 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  1 09:51:26 np0005604375 systemd[1]: var-lib-containers-storage-overlay-141e26b99d29bb15c50d0a6afc1f8f0661a1efe5fa1e3754e04a80f519039271-merged.mount: Deactivated successfully.
Feb  1 09:51:26 np0005604375 podman[91131]: 2026-02-01 14:51:26.253365897 +0000 UTC m=+0.673158217 container remove 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 09:51:26 np0005604375 systemd[1]: libpod-conmon-7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f.scope: Deactivated successfully.
Feb  1 09:51:26 np0005604375 python3[91208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:26 np0005604375 podman[91209]: 2026-02-01 14:51:26.531899485 +0000 UTC m=+0.043689354 container create aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:26 np0005604375 systemd[1]: Started libpod-conmon-aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04.scope.
Feb  1 09:51:26 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d4ccb25f4e315d14e8b08854be04215980aedd0aa8790ec631d666b145a5eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d4ccb25f4e315d14e8b08854be04215980aedd0aa8790ec631d666b145a5eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:26 np0005604375 podman[91209]: 2026-02-01 14:51:26.5978959 +0000 UTC m=+0.109685779 container init aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:26 np0005604375 podman[91209]: 2026-02-01 14:51:26.5087575 +0000 UTC m=+0.020547449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:26 np0005604375 podman[91209]: 2026-02-01 14:51:26.604592468 +0000 UTC m=+0.116382377 container start aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:51:26 np0005604375 podman[91209]: 2026-02-01 14:51:26.609013249 +0000 UTC m=+0.120803128 container attach aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 09:51:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb  1 09:51:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb  1 09:51:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb  1 09:51:27 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb  1 09:51:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  1 09:51:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Feb  1 09:51:27 np0005604375 gifted_faraday[91225]: enabled application 'rbd' on pool 'backups'
Feb  1 09:51:27 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Feb  1 09:51:27 np0005604375 systemd[1]: libpod-aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04.scope: Deactivated successfully.
Feb  1 09:51:27 np0005604375 podman[91209]: 2026-02-01 14:51:27.271652323 +0000 UTC m=+0.783442192 container died aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:51:27 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a0d4ccb25f4e315d14e8b08854be04215980aedd0aa8790ec631d666b145a5eb-merged.mount: Deactivated successfully.
Feb  1 09:51:27 np0005604375 podman[91209]: 2026-02-01 14:51:27.310551135 +0000 UTC m=+0.822341014 container remove aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:51:27 np0005604375 systemd[1]: libpod-conmon-aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04.scope: Deactivated successfully.
Feb  1 09:51:27 np0005604375 python3[91286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:27 np0005604375 podman[91287]: 2026-02-01 14:51:27.592478854 +0000 UTC m=+0.032820513 container create 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:27 np0005604375 systemd[1]: Started libpod-conmon-1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a.scope.
Feb  1 09:51:27 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:27 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e490f8634d4deb479040b402c26c0d04d58784c750dcc5752857ac1862fca9a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:27 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e490f8634d4deb479040b402c26c0d04d58784c750dcc5752857ac1862fca9a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:27 np0005604375 podman[91287]: 2026-02-01 14:51:27.652865813 +0000 UTC m=+0.093207552 container init 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:51:27 np0005604375 podman[91287]: 2026-02-01 14:51:27.657687995 +0000 UTC m=+0.098029674 container start 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:27 np0005604375 podman[91287]: 2026-02-01 14:51:27.660772407 +0000 UTC m=+0.101114076 container attach 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:27 np0005604375 podman[91287]: 2026-02-01 14:51:27.576746548 +0000 UTC m=+0.017088227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v51: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Feb  1 09:51:28 np0005604375 goofy_proskuriakova[91302]: enabled application 'rbd' on pool 'images'
Feb  1 09:51:28 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Feb  1 09:51:28 np0005604375 systemd[1]: libpod-1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a.scope: Deactivated successfully.
Feb  1 09:51:28 np0005604375 podman[91287]: 2026-02-01 14:51:28.285492667 +0000 UTC m=+0.725834326 container died 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:28 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e490f8634d4deb479040b402c26c0d04d58784c750dcc5752857ac1862fca9a6-merged.mount: Deactivated successfully.
Feb  1 09:51:28 np0005604375 podman[91287]: 2026-02-01 14:51:28.320246206 +0000 UTC m=+0.760587875 container remove 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb  1 09:51:28 np0005604375 systemd[1]: libpod-conmon-1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a.scope: Deactivated successfully.
Feb  1 09:51:28 np0005604375 python3[91363]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:28 np0005604375 podman[91364]: 2026-02-01 14:51:28.596419764 +0000 UTC m=+0.045727825 container create 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb  1 09:51:28 np0005604375 systemd[1]: Started libpod-conmon-293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d.scope.
Feb  1 09:51:28 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:28 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f667c51d262806c66edd602b2aaa2ea5dc7017748ac30070591f3fd81866d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:28 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f667c51d262806c66edd602b2aaa2ea5dc7017748ac30070591f3fd81866d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:28 np0005604375 podman[91364]: 2026-02-01 14:51:28.655157464 +0000 UTC m=+0.104465625 container init 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:28 np0005604375 podman[91364]: 2026-02-01 14:51:28.568334323 +0000 UTC m=+0.017642464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:28 np0005604375 podman[91364]: 2026-02-01 14:51:28.665320725 +0000 UTC m=+0.114628776 container start 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:51:28 np0005604375 podman[91364]: 2026-02-01 14:51:28.668281913 +0000 UTC m=+0.117589984 container attach 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Feb  1 09:51:29 np0005604375 focused_liskov[91380]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  1 09:51:29 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb  1 09:51:29 np0005604375 systemd[1]: libpod-293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d.scope: Deactivated successfully.
Feb  1 09:51:29 np0005604375 podman[91364]: 2026-02-01 14:51:29.305921346 +0000 UTC m=+0.755229437 container died 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:29 np0005604375 systemd[1]: var-lib-containers-storage-overlay-03f667c51d262806c66edd602b2aaa2ea5dc7017748ac30070591f3fd81866d5-merged.mount: Deactivated successfully.
Feb  1 09:51:29 np0005604375 podman[91364]: 2026-02-01 14:51:29.348504638 +0000 UTC m=+0.797812689 container remove 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:29 np0005604375 systemd[1]: libpod-conmon-293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d.scope: Deactivated successfully.
Feb  1 09:51:29 np0005604375 python3[91444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:29 np0005604375 podman[91445]: 2026-02-01 14:51:29.749285687 +0000 UTC m=+0.058376420 container create 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 09:51:29 np0005604375 systemd[1]: Started libpod-conmon-47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873.scope.
Feb  1 09:51:29 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:29 np0005604375 podman[91445]: 2026-02-01 14:51:29.722187444 +0000 UTC m=+0.031278247 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb56ee10afa71e2a96acb5b95e317e32267dfc4973951cb355c9cfd5cb2b56e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb56ee10afa71e2a96acb5b95e317e32267dfc4973951cb355c9cfd5cb2b56e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:29 np0005604375 podman[91445]: 2026-02-01 14:51:29.838133498 +0000 UTC m=+0.147224241 container init 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  1 09:51:29 np0005604375 podman[91445]: 2026-02-01 14:51:29.845116814 +0000 UTC m=+0.154207557 container start 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:51:29 np0005604375 podman[91445]: 2026-02-01 14:51:29.848069332 +0000 UTC m=+0.157160065 container attach 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Feb  1 09:51:30 np0005604375 affectionate_merkle[91461]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Feb  1 09:51:30 np0005604375 systemd[1]: libpod-47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873.scope: Deactivated successfully.
Feb  1 09:51:30 np0005604375 podman[91486]: 2026-02-01 14:51:30.344891155 +0000 UTC m=+0.019435506 container died 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:30 np0005604375 systemd[1]: var-lib-containers-storage-overlay-deb56ee10afa71e2a96acb5b95e317e32267dfc4973951cb355c9cfd5cb2b56e-merged.mount: Deactivated successfully.
Feb  1 09:51:30 np0005604375 podman[91486]: 2026-02-01 14:51:30.371750531 +0000 UTC m=+0.046294822 container remove 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 09:51:30 np0005604375 systemd[1]: libpod-conmon-47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873.scope: Deactivated successfully.
Feb  1 09:51:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:31 np0005604375 python3[91576]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:51:31 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  1 09:51:31 np0005604375 python3[91647]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957490.942764-36514-17192024930168/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:51:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:32 np0005604375 python3[91749]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:51:32 np0005604375 python3[91824]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957491.895641-36528-243667670197410/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e13ba4992094cac129dd8dc4109da05eb92e153b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:51:32 np0005604375 python3[91874]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:32 np0005604375 podman[91875]: 2026-02-01 14:51:32.986594197 +0000 UTC m=+0.053129424 container create a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:33 np0005604375 systemd[1]: Started libpod-conmon-a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594.scope.
Feb  1 09:51:33 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:33 np0005604375 podman[91875]: 2026-02-01 14:51:32.964269526 +0000 UTC m=+0.030804813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:33 np0005604375 podman[91875]: 2026-02-01 14:51:33.078596762 +0000 UTC m=+0.145132059 container init a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:51:33 np0005604375 podman[91875]: 2026-02-01 14:51:33.085211478 +0000 UTC m=+0.151746705 container start a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:51:33 np0005604375 podman[91875]: 2026-02-01 14:51:33.088453974 +0000 UTC m=+0.154989291 container attach a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 09:51:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  1 09:51:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  1 09:51:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  1 09:51:33 np0005604375 awesome_ellis[91890]: 
Feb  1 09:51:33 np0005604375 awesome_ellis[91890]: [global]
Feb  1 09:51:33 np0005604375 awesome_ellis[91890]: #011fsid = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb  1 09:51:33 np0005604375 awesome_ellis[91890]: #011mon_host = 192.168.122.100
Feb  1 09:51:33 np0005604375 awesome_ellis[91890]: #011rgw_keystone_api_version = 3
Feb  1 09:51:33 np0005604375 systemd[1]: libpod-a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594.scope: Deactivated successfully.
Feb  1 09:51:33 np0005604375 podman[91875]: 2026-02-01 14:51:33.502542967 +0000 UTC m=+0.569078214 container died a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:51:33 np0005604375 systemd[1]: var-lib-containers-storage-overlay-752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5-merged.mount: Deactivated successfully.
Feb  1 09:51:33 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  1 09:51:33 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  1 09:51:33 np0005604375 podman[91875]: 2026-02-01 14:51:33.547598951 +0000 UTC m=+0.614134168 container remove a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 09:51:33 np0005604375 systemd[1]: libpod-conmon-a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594.scope: Deactivated successfully.
Feb  1 09:51:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:33 np0005604375 python3[92002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:33 np0005604375 podman[92034]: 2026-02-01 14:51:33.929381478 +0000 UTC m=+0.049118716 container create b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 09:51:33 np0005604375 systemd[1]: Started libpod-conmon-b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b.scope.
Feb  1 09:51:33 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:33 np0005604375 podman[92034]: 2026-02-01 14:51:33.901551314 +0000 UTC m=+0.021288602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:33 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:34 np0005604375 podman[92034]: 2026-02-01 14:51:34.022679861 +0000 UTC m=+0.142417099 container init b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb  1 09:51:34 np0005604375 podman[92034]: 2026-02-01 14:51:34.026988388 +0000 UTC m=+0.146725586 container start b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:34 np0005604375 podman[92034]: 2026-02-01 14:51:34.030336088 +0000 UTC m=+0.150073336 container attach b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:34 np0005604375 podman[92064]: 2026-02-01 14:51:34.03817355 +0000 UTC m=+0.073651953 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:34 np0005604375 podman[92064]: 2026-02-01 14:51:34.148636461 +0000 UTC m=+0.184114844 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb  1 09:51:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2639964740' entity='client.admin' 
Feb  1 09:51:34 np0005604375 great_dhawan[92071]: set ssl_option
Feb  1 09:51:34 np0005604375 systemd[1]: libpod-b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b.scope: Deactivated successfully.
Feb  1 09:51:34 np0005604375 podman[92034]: 2026-02-01 14:51:34.590652082 +0000 UTC m=+0.710389290 container died b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:51:34 np0005604375 systemd[1]: var-lib-containers-storage-overlay-88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952-merged.mount: Deactivated successfully.
Feb  1 09:51:34 np0005604375 podman[92034]: 2026-02-01 14:51:34.623744032 +0000 UTC m=+0.743481230 container remove b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:34 np0005604375 systemd[1]: libpod-conmon-b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b.scope: Deactivated successfully.
Feb  1 09:51:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:34 np0005604375 python3[92314]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:34 np0005604375 podman[92323]: 2026-02-01 14:51:34.94987711 +0000 UTC m=+0.038677256 container create c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 09:51:34 np0005604375 systemd[1]: Started libpod-conmon-c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2.scope.
Feb  1 09:51:34 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 podman[92323]: 2026-02-01 14:51:35.018363919 +0000 UTC m=+0.107164075 container init c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:35 np0005604375 podman[92323]: 2026-02-01 14:51:35.024290514 +0000 UTC m=+0.113090670 container start c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:35 np0005604375 podman[92323]: 2026-02-01 14:51:35.028152138 +0000 UTC m=+0.116952324 container attach c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  1 09:51:35 np0005604375 podman[92323]: 2026-02-01 14:51:34.932367872 +0000 UTC m=+0.021168058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:51:35 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Feb  1 09:51:35 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:35 np0005604375 pensive_darwin[92341]: Scheduled rgw.rgw update...
Feb  1 09:51:35 np0005604375 systemd[1]: libpod-c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2.scope: Deactivated successfully.
Feb  1 09:51:35 np0005604375 podman[92323]: 2026-02-01 14:51:35.485207923 +0000 UTC m=+0.574008139 container died c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a-merged.mount: Deactivated successfully.
Feb  1 09:51:35 np0005604375 podman[92323]: 2026-02-01 14:51:35.522336772 +0000 UTC m=+0.611136948 container remove c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:35 np0005604375 systemd[1]: libpod-conmon-c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2.scope: Deactivated successfully.
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2639964740' entity='client.admin' 
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:35 np0005604375 podman[92471]: 2026-02-01 14:51:35.581953428 +0000 UTC m=+0.038601224 container create 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 09:51:35 np0005604375 systemd[1]: Started libpod-conmon-938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa.scope.
Feb  1 09:51:35 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:35 np0005604375 podman[92471]: 2026-02-01 14:51:35.632845615 +0000 UTC m=+0.089493431 container init 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:35 np0005604375 podman[92471]: 2026-02-01 14:51:35.637497503 +0000 UTC m=+0.094145299 container start 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 09:51:35 np0005604375 agitated_antonelli[92488]: 167 167
Feb  1 09:51:35 np0005604375 podman[92471]: 2026-02-01 14:51:35.640568634 +0000 UTC m=+0.097216470 container attach 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 09:51:35 np0005604375 systemd[1]: libpod-938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa.scope: Deactivated successfully.
Feb  1 09:51:35 np0005604375 podman[92471]: 2026-02-01 14:51:35.64383065 +0000 UTC m=+0.100478466 container died 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:35 np0005604375 podman[92471]: 2026-02-01 14:51:35.559477842 +0000 UTC m=+0.016125658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-023880fe4a589d4bf6962a02dcab0f06085b284e71c6dfcce5401ec7cc2617a3-merged.mount: Deactivated successfully.
Feb  1 09:51:35 np0005604375 podman[92471]: 2026-02-01 14:51:35.682850216 +0000 UTC m=+0.139498032 container remove 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:51:35 np0005604375 systemd[1]: libpod-conmon-938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa.scope: Deactivated successfully.
Feb  1 09:51:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:35 np0005604375 podman[92511]: 2026-02-01 14:51:35.823625585 +0000 UTC m=+0.061357898 container create d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 09:51:35 np0005604375 podman[92511]: 2026-02-01 14:51:35.797017657 +0000 UTC m=+0.034750020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:35 np0005604375 systemd[1]: Started libpod-conmon-d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2.scope.
Feb  1 09:51:35 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:35 np0005604375 podman[92511]: 2026-02-01 14:51:35.949230215 +0000 UTC m=+0.186962578 container init d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:35 np0005604375 podman[92511]: 2026-02-01 14:51:35.963805126 +0000 UTC m=+0.201537449 container start d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 09:51:35 np0005604375 podman[92511]: 2026-02-01 14:51:35.968266518 +0000 UTC m=+0.205998841 container attach d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  1 09:51:36 np0005604375 python3[92610]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:51:36 np0005604375 confident_dirac[92527]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:51:36 np0005604375 confident_dirac[92527]: --> All data devices are unavailable
Feb  1 09:51:36 np0005604375 systemd[1]: libpod-d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2.scope: Deactivated successfully.
Feb  1 09:51:36 np0005604375 podman[92511]: 2026-02-01 14:51:36.465144843 +0000 UTC m=+0.702877156 container died d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:36 np0005604375 systemd[1]: var-lib-containers-storage-overlay-77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee-merged.mount: Deactivated successfully.
Feb  1 09:51:36 np0005604375 podman[92511]: 2026-02-01 14:51:36.517791942 +0000 UTC m=+0.755524265 container remove d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb  1 09:51:36 np0005604375 systemd[1]: libpod-conmon-d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2.scope: Deactivated successfully.
Feb  1 09:51:36 np0005604375 ceph-mon[75179]: Saving service rgw.rgw spec with placement compute-0
Feb  1 09:51:36 np0005604375 python3[92700]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957496.0966663-36569-87450757092027/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:51:36 np0005604375 podman[92812]: 2026-02-01 14:51:36.953604649 +0000 UTC m=+0.047103206 container create 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 09:51:36 np0005604375 systemd[1]: Started libpod-conmon-286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c.scope.
Feb  1 09:51:37 np0005604375 python3[92824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:37 np0005604375 podman[92812]: 2026-02-01 14:51:36.938011897 +0000 UTC m=+0.031510494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:37 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:37 np0005604375 podman[92812]: 2026-02-01 14:51:37.05529096 +0000 UTC m=+0.148789517 container init 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:37 np0005604375 podman[92812]: 2026-02-01 14:51:37.062183504 +0000 UTC m=+0.155682061 container start 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:51:37 np0005604375 nice_margulis[92834]: 167 167
Feb  1 09:51:37 np0005604375 systemd[1]: libpod-286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c.scope: Deactivated successfully.
Feb  1 09:51:37 np0005604375 podman[92812]: 2026-02-01 14:51:37.065330388 +0000 UTC m=+0.158828945 container attach 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:51:37 np0005604375 podman[92812]: 2026-02-01 14:51:37.065527833 +0000 UTC m=+0.159026390 container died 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 09:51:37 np0005604375 podman[92837]: 2026-02-01 14:51:37.083729332 +0000 UTC m=+0.044506749 container create 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:51:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b45aed60b961da83f98c0d514c260e4349ad08b57e6398ef538c22b9ae9cd4f9-merged.mount: Deactivated successfully.
Feb  1 09:51:37 np0005604375 podman[92812]: 2026-02-01 14:51:37.105485217 +0000 UTC m=+0.198983814 container remove 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:37 np0005604375 systemd[1]: libpod-conmon-286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c.scope: Deactivated successfully.
Feb  1 09:51:37 np0005604375 systemd[1]: Started libpod-conmon-72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2.scope.
Feb  1 09:51:37 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:37 np0005604375 podman[92837]: 2026-02-01 14:51:37.062558155 +0000 UTC m=+0.023335612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:37 np0005604375 podman[92837]: 2026-02-01 14:51:37.173359937 +0000 UTC m=+0.134137444 container init 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:51:37 np0005604375 podman[92837]: 2026-02-01 14:51:37.179119987 +0000 UTC m=+0.139897404 container start 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:51:37 np0005604375 podman[92837]: 2026-02-01 14:51:37.182045404 +0000 UTC m=+0.142822861 container attach 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:37 np0005604375 podman[92876]: 2026-02-01 14:51:37.247735859 +0000 UTC m=+0.049973391 container create 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:51:37 np0005604375 systemd[1]: Started libpod-conmon-7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce.scope.
Feb  1 09:51:37 np0005604375 podman[92876]: 2026-02-01 14:51:37.221801941 +0000 UTC m=+0.024039523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:37 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:37 np0005604375 podman[92876]: 2026-02-01 14:51:37.356443579 +0000 UTC m=+0.158681121 container init 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:51:37 np0005604375 podman[92876]: 2026-02-01 14:51:37.364380264 +0000 UTC m=+0.166617766 container start 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  1 09:51:37 np0005604375 podman[92876]: 2026-02-01 14:51:37.36797311 +0000 UTC m=+0.170210712 container attach 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 09:51:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:51:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  1 09:51:37 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0[75175]: 2026-02-01T14:51:37.584+0000 7f813d74e640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e2 new map
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-02-01T14:51:37:585930+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-01T14:51:37.585458+0000#012modified#0112026-02-01T14:51:37.585459+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb  1 09:51:37 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb  1 09:51:37 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  1 09:51:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb  1 09:51:37 np0005604375 pensive_golick[92912]: {
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:    "0": [
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:        {
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "devices": [
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "/dev/loop3"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            ],
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_name": "ceph_lv0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_size": "21470642176",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "name": "ceph_lv0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "tags": {
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.crush_device_class": "",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.encrypted": "0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osd_id": "0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.type": "block",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.vdo": "0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.with_tpm": "0"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            },
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "type": "block",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "vg_name": "ceph_vg0"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:        }
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:    ],
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:    "1": [
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:        {
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "devices": [
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "/dev/loop4"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            ],
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_name": "ceph_lv1",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_size": "21470642176",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "name": "ceph_lv1",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "tags": {
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.crush_device_class": "",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.encrypted": "0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osd_id": "1",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.type": "block",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.vdo": "0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.with_tpm": "0"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            },
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "type": "block",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "vg_name": "ceph_vg1"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:        }
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:    ],
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:    "2": [
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:        {
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "devices": [
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "/dev/loop5"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            ],
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_name": "ceph_lv2",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_size": "21470642176",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "name": "ceph_lv2",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "tags": {
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.crush_device_class": "",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.encrypted": "0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osd_id": "2",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.type": "block",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.vdo": "0",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:                "ceph.with_tpm": "0"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            },
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "type": "block",
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:            "vg_name": "ceph_vg2"
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:        }
Feb  1 09:51:37 np0005604375 pensive_golick[92912]:    ]
Feb  1 09:51:37 np0005604375 pensive_golick[92912]: }
Feb  1 09:51:37 np0005604375 systemd[1]: libpod-72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2.scope: Deactivated successfully.
Feb  1 09:51:37 np0005604375 podman[92837]: 2026-02-01 14:51:37.626776535 +0000 UTC m=+0.587553962 container died 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 09:51:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f-merged.mount: Deactivated successfully.
Feb  1 09:51:37 np0005604375 systemd[1]: libpod-7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce.scope: Deactivated successfully.
Feb  1 09:51:37 np0005604375 podman[92837]: 2026-02-01 14:51:37.657309289 +0000 UTC m=+0.618086706 container remove 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:37 np0005604375 podman[92876]: 2026-02-01 14:51:37.659662538 +0000 UTC m=+0.461900060 container died 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 09:51:37 np0005604375 systemd[1]: libpod-conmon-72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2.scope: Deactivated successfully.
Feb  1 09:51:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:37 np0005604375 podman[92876]: 2026-02-01 14:51:37.709468633 +0000 UTC m=+0.511706165 container remove 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:37 np0005604375 systemd[1]: libpod-conmon-7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce.scope: Deactivated successfully.
Feb  1 09:51:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4-merged.mount: Deactivated successfully.
Feb  1 09:51:37 np0005604375 python3[92999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:38 np0005604375 podman[93026]: 2026-02-01 14:51:38.03521659 +0000 UTC m=+0.049672372 container create e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 09:51:38 np0005604375 systemd[1]: Started libpod-conmon-e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24.scope.
Feb  1 09:51:38 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:38 np0005604375 podman[93026]: 2026-02-01 14:51:38.015743954 +0000 UTC m=+0.030199836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:38 np0005604375 podman[93026]: 2026-02-01 14:51:38.11318983 +0000 UTC m=+0.127645572 container init e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  1 09:51:38 np0005604375 podman[93026]: 2026-02-01 14:51:38.118087665 +0000 UTC m=+0.132543447 container start e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 09:51:38 np0005604375 podman[93026]: 2026-02-01 14:51:38.121433834 +0000 UTC m=+0.135889596 container attach e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 09:51:38 np0005604375 podman[93057]: 2026-02-01 14:51:38.16350325 +0000 UTC m=+0.064253124 container create a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb  1 09:51:38 np0005604375 systemd[1]: Started libpod-conmon-a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d.scope.
Feb  1 09:51:38 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:38 np0005604375 podman[93057]: 2026-02-01 14:51:38.226046112 +0000 UTC m=+0.126796026 container init a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:38 np0005604375 podman[93057]: 2026-02-01 14:51:38.13683129 +0000 UTC m=+0.037581244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:38 np0005604375 podman[93057]: 2026-02-01 14:51:38.23137884 +0000 UTC m=+0.132128714 container start a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 09:51:38 np0005604375 bold_bardeen[93074]: 167 167
Feb  1 09:51:38 np0005604375 systemd[1]: libpod-a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d.scope: Deactivated successfully.
Feb  1 09:51:38 np0005604375 podman[93057]: 2026-02-01 14:51:38.234283956 +0000 UTC m=+0.135033880 container attach a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 09:51:38 np0005604375 podman[93057]: 2026-02-01 14:51:38.2347572 +0000 UTC m=+0.135507084 container died a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:38 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b8e9321d08d1c2ca3f4ddf1bb2d0cec2c64fbe9face3f8bf354aae1f8da9f8f4-merged.mount: Deactivated successfully.
Feb  1 09:51:38 np0005604375 podman[93057]: 2026-02-01 14:51:38.268822999 +0000 UTC m=+0.169572873 container remove a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:38 np0005604375 systemd[1]: libpod-conmon-a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d.scope: Deactivated successfully.
Feb  1 09:51:38 np0005604375 podman[93118]: 2026-02-01 14:51:38.425676074 +0000 UTC m=+0.040822530 container create 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Feb  1 09:51:38 np0005604375 systemd[1]: Started libpod-conmon-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope.
Feb  1 09:51:38 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:38 np0005604375 podman[93118]: 2026-02-01 14:51:38.498035737 +0000 UTC m=+0.113182163 container init 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  1 09:51:38 np0005604375 podman[93118]: 2026-02-01 14:51:38.403086065 +0000 UTC m=+0.018232511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:38 np0005604375 podman[93118]: 2026-02-01 14:51:38.504947881 +0000 UTC m=+0.120094327 container start 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:38 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 09:51:38 np0005604375 ceph-mgr[75469]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb  1 09:51:38 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb  1 09:51:38 np0005604375 podman[93118]: 2026-02-01 14:51:38.509095934 +0000 UTC m=+0.124242370 container attach 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 09:51:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  1 09:51:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:38 np0005604375 cool_thompson[93052]: Scheduled mds.cephfs update...
Feb  1 09:51:38 np0005604375 systemd[1]: libpod-e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24.scope: Deactivated successfully.
Feb  1 09:51:38 np0005604375 podman[93026]: 2026-02-01 14:51:38.52888144 +0000 UTC m=+0.543337242 container died e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb  1 09:51:38 np0005604375 podman[93026]: 2026-02-01 14:51:38.570542764 +0000 UTC m=+0.584998516 container remove e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:38 np0005604375 systemd[1]: libpod-conmon-e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24.scope: Deactivated successfully.
Feb  1 09:51:38 np0005604375 ceph-mon[75179]: Saving service mds.cephfs spec with placement compute-0
Feb  1 09:51:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:38 np0005604375 systemd[1]: var-lib-containers-storage-overlay-35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727-merged.mount: Deactivated successfully.
Feb  1 09:51:39 np0005604375 lvm[93228]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:39 np0005604375 lvm[93229]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:39 np0005604375 lvm[93228]: VG ceph_vg0 finished
Feb  1 09:51:39 np0005604375 lvm[93229]: VG ceph_vg1 finished
Feb  1 09:51:39 np0005604375 lvm[93231]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:39 np0005604375 lvm[93231]: VG ceph_vg2 finished
Feb  1 09:51:39 np0005604375 vigilant_babbage[93135]: {}
Feb  1 09:51:39 np0005604375 systemd[1]: libpod-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope: Deactivated successfully.
Feb  1 09:51:39 np0005604375 systemd[1]: libpod-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope: Consumed 1.044s CPU time.
Feb  1 09:51:39 np0005604375 podman[93118]: 2026-02-01 14:51:39.309767535 +0000 UTC m=+0.924913951 container died 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:39 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7-merged.mount: Deactivated successfully.
Feb  1 09:51:39 np0005604375 podman[93118]: 2026-02-01 14:51:39.585101819 +0000 UTC m=+1.200248225 container remove 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 09:51:39 np0005604375 python3[93317]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  1 09:51:39 np0005604375 systemd[1]: libpod-conmon-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope: Deactivated successfully.
Feb  1 09:51:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:39 np0005604375 ceph-mon[75179]: Saving service mds.cephfs spec with placement compute-0
Feb  1 09:51:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:39 np0005604375 python3[93446]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957499.208748-36618-277888170047934/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=9e80b5c3ad70771b2808c3ea209191214d8953f2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:51:40 np0005604375 podman[93541]: 2026-02-01 14:51:40.263938733 +0000 UTC m=+0.078203887 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:40 np0005604375 podman[93541]: 2026-02-01 14:51:40.359501203 +0000 UTC m=+0.173766337 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:40 np0005604375 python3[93586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:40 np0005604375 podman[93613]: 2026-02-01 14:51:40.518505912 +0000 UTC m=+0.045318794 container create d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:40 np0005604375 systemd[1]: Started libpod-conmon-d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5.scope.
Feb  1 09:51:40 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef96d11df7d1d9982a99c5023b183d696e278b6c2bd236093b5807676574a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef96d11df7d1d9982a99c5023b183d696e278b6c2bd236093b5807676574a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:40 np0005604375 podman[93613]: 2026-02-01 14:51:40.499819158 +0000 UTC m=+0.026632070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:40 np0005604375 podman[93613]: 2026-02-01 14:51:40.608422554 +0000 UTC m=+0.135235586 container init d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:40 np0005604375 podman[93613]: 2026-02-01 14:51:40.614470403 +0000 UTC m=+0.141283275 container start d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 09:51:40 np0005604375 podman[93613]: 2026-02-01 14:51:40.617782482 +0000 UTC m=+0.144595424 container attach d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:51:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  1 09:51:41 np0005604375 systemd[1]: libpod-d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5.scope: Deactivated successfully.
Feb  1 09:51:41 np0005604375 podman[93613]: 2026-02-01 14:51:41.144640314 +0000 UTC m=+0.671453196 container died d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-7fef96d11df7d1d9982a99c5023b183d696e278b6c2bd236093b5807676574a3-merged.mount: Deactivated successfully.
Feb  1 09:51:41 np0005604375 podman[93613]: 2026-02-01 14:51:41.180678022 +0000 UTC m=+0.707490904 container remove d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  1 09:51:41 np0005604375 systemd[1]: libpod-conmon-d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5.scope: Deactivated successfully.
Feb  1 09:51:41 np0005604375 podman[93826]: 2026-02-01 14:51:41.462049784 +0000 UTC m=+0.065034967 container create de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  1 09:51:41 np0005604375 systemd[1]: Started libpod-conmon-de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816.scope.
Feb  1 09:51:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:41 np0005604375 podman[93826]: 2026-02-01 14:51:41.433406386 +0000 UTC m=+0.036391609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:41 np0005604375 podman[93826]: 2026-02-01 14:51:41.540826767 +0000 UTC m=+0.143811930 container init de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:51:41 np0005604375 podman[93826]: 2026-02-01 14:51:41.549284918 +0000 UTC m=+0.152270091 container start de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:41 np0005604375 podman[93826]: 2026-02-01 14:51:41.553901445 +0000 UTC m=+0.156886608 container attach de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:41 np0005604375 objective_bassi[93843]: 167 167
Feb  1 09:51:41 np0005604375 systemd[1]: libpod-de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816.scope: Deactivated successfully.
Feb  1 09:51:41 np0005604375 podman[93826]: 2026-02-01 14:51:41.557226693 +0000 UTC m=+0.160211856 container died de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-66d6fbd0a98567dbc88615da73ba550ce8f812b67b11c9df08504d5028bff6ec-merged.mount: Deactivated successfully.
Feb  1 09:51:41 np0005604375 podman[93826]: 2026-02-01 14:51:41.592886339 +0000 UTC m=+0.195871492 container remove de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  1 09:51:41 np0005604375 systemd[1]: libpod-conmon-de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816.scope: Deactivated successfully.
Feb  1 09:51:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:41 np0005604375 podman[93866]: 2026-02-01 14:51:41.790225123 +0000 UTC m=+0.064045268 container create 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:51:41 np0005604375 systemd[1]: Started libpod-conmon-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope.
Feb  1 09:51:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:41 np0005604375 podman[93866]: 2026-02-01 14:51:41.767869811 +0000 UTC m=+0.041689986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:41 np0005604375 podman[93866]: 2026-02-01 14:51:41.904995512 +0000 UTC m=+0.178815687 container init 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 09:51:41 np0005604375 podman[93866]: 2026-02-01 14:51:41.912642578 +0000 UTC m=+0.186462713 container start 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:41 np0005604375 podman[93866]: 2026-02-01 14:51:41.916531144 +0000 UTC m=+0.190351309 container attach 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 09:51:41 np0005604375 python3[93905]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb  1 09:51:41 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  1 09:51:42 np0005604375 podman[93914]: 2026-02-01 14:51:42.057510309 +0000 UTC m=+0.085603016 container create bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:42 np0005604375 systemd[1]: Started libpod-conmon-bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab.scope.
Feb  1 09:51:42 np0005604375 podman[93914]: 2026-02-01 14:51:42.013216097 +0000 UTC m=+0.041308864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:42 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ff9d7db27bb552feff2d397ff0875b5086ba12ff552328973a377796ca71e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ff9d7db27bb552feff2d397ff0875b5086ba12ff552328973a377796ca71e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:42 np0005604375 podman[93914]: 2026-02-01 14:51:42.152650546 +0000 UTC m=+0.180743263 container init bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb  1 09:51:42 np0005604375 podman[93914]: 2026-02-01 14:51:42.162748455 +0000 UTC m=+0.190841122 container start bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:42 np0005604375 podman[93914]: 2026-02-01 14:51:42.165851447 +0000 UTC m=+0.193944124 container attach bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:51:42 np0005604375 gallant_knuth[93908]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:51:42 np0005604375 gallant_knuth[93908]: --> All data devices are unavailable
Feb  1 09:51:42 np0005604375 systemd[1]: libpod-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope: Deactivated successfully.
Feb  1 09:51:42 np0005604375 conmon[93908]: conmon 7dd015145bfa3b08c73e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope/container/memory.events
Feb  1 09:51:42 np0005604375 podman[93866]: 2026-02-01 14:51:42.456230206 +0000 UTC m=+0.730050341 container died 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 09:51:42 np0005604375 systemd[1]: var-lib-containers-storage-overlay-aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88-merged.mount: Deactivated successfully.
Feb  1 09:51:42 np0005604375 podman[93866]: 2026-02-01 14:51:42.507947977 +0000 UTC m=+0.781768132 container remove 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:51:42 np0005604375 systemd[1]: libpod-conmon-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope: Deactivated successfully.
Feb  1 09:51:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  1 09:51:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690848245' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  1 09:51:42 np0005604375 fervent_bardeen[93932]: 
Feb  1 09:51:42 np0005604375 fervent_bardeen[93932]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":102,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":29,"num_osds":3,"num_up_osds":3,"osd_up_since":1769957475,"num_in_osds":3,"osd_in_since":1769957454,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83808256,"bytes_avail":64328118272,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-02-01T14:51:37:585930+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-01T14:51:19.699816+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb  1 09:51:42 np0005604375 systemd[1]: libpod-bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab.scope: Deactivated successfully.
Feb  1 09:51:42 np0005604375 podman[93914]: 2026-02-01 14:51:42.755916901 +0000 UTC m=+0.784009568 container died bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:51:42 np0005604375 systemd[1]: var-lib-containers-storage-overlay-75ff9d7db27bb552feff2d397ff0875b5086ba12ff552328973a377796ca71e3-merged.mount: Deactivated successfully.
Feb  1 09:51:42 np0005604375 podman[93914]: 2026-02-01 14:51:42.808520549 +0000 UTC m=+0.836613246 container remove bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 09:51:42 np0005604375 systemd[1]: libpod-conmon-bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab.scope: Deactivated successfully.
Feb  1 09:51:43 np0005604375 podman[94081]: 2026-02-01 14:51:43.067613552 +0000 UTC m=+0.062029508 container create 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:43 np0005604375 systemd[1]: Started libpod-conmon-0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976.scope.
Feb  1 09:51:43 np0005604375 podman[94081]: 2026-02-01 14:51:43.040160579 +0000 UTC m=+0.034576565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:43 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:43 np0005604375 podman[94081]: 2026-02-01 14:51:43.153576608 +0000 UTC m=+0.147992604 container init 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:43 np0005604375 podman[94081]: 2026-02-01 14:51:43.15907694 +0000 UTC m=+0.153492876 container start 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 09:51:43 np0005604375 podman[94081]: 2026-02-01 14:51:43.162611585 +0000 UTC m=+0.157027591 container attach 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:43 np0005604375 python3[94089]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:43 np0005604375 happy_faraday[94099]: 167 167
Feb  1 09:51:43 np0005604375 systemd[1]: libpod-0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976.scope: Deactivated successfully.
Feb  1 09:51:43 np0005604375 podman[94081]: 2026-02-01 14:51:43.166080318 +0000 UTC m=+0.160496274 container died 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:43 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a9be3257d2caaeeb741c89502401de8f38842c12712a1672a4b93945142a68b0-merged.mount: Deactivated successfully.
Feb  1 09:51:43 np0005604375 podman[94081]: 2026-02-01 14:51:43.210226035 +0000 UTC m=+0.204642011 container remove 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:51:43 np0005604375 systemd[1]: libpod-conmon-0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976.scope: Deactivated successfully.
Feb  1 09:51:43 np0005604375 podman[94105]: 2026-02-01 14:51:43.234748531 +0000 UTC m=+0.051962519 container create bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:43 np0005604375 systemd[1]: Started libpod-conmon-bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb.scope.
Feb  1 09:51:43 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:43 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3227db181a82b92088e9b0f3bc712d6623639b82e3193d9cf7910c151241859/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:43 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3227db181a82b92088e9b0f3bc712d6623639b82e3193d9cf7910c151241859/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:43 np0005604375 podman[94105]: 2026-02-01 14:51:43.300834078 +0000 UTC m=+0.118048086 container init bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 09:51:43 np0005604375 podman[94105]: 2026-02-01 14:51:43.305562449 +0000 UTC m=+0.122776447 container start bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  1 09:51:43 np0005604375 podman[94105]: 2026-02-01 14:51:43.309056682 +0000 UTC m=+0.126270700 container attach bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  1 09:51:43 np0005604375 podman[94105]: 2026-02-01 14:51:43.218571532 +0000 UTC m=+0.035785540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:43 np0005604375 podman[94144]: 2026-02-01 14:51:43.377843089 +0000 UTC m=+0.045282422 container create 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb  1 09:51:43 np0005604375 systemd[1]: Started libpod-conmon-0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e.scope.
Feb  1 09:51:43 np0005604375 podman[94144]: 2026-02-01 14:51:43.35897286 +0000 UTC m=+0.026412233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:43 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:43 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:43 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:43 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:43 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:43 np0005604375 podman[94144]: 2026-02-01 14:51:43.478781158 +0000 UTC m=+0.146220531 container init 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:43 np0005604375 podman[94144]: 2026-02-01 14:51:43.487444005 +0000 UTC m=+0.154883338 container start 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:51:43 np0005604375 podman[94144]: 2026-02-01 14:51:43.491759883 +0000 UTC m=+0.159199176 container attach 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 09:51:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]: {
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:    "0": [
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:        {
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "devices": [
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "/dev/loop3"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            ],
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_name": "ceph_lv0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_size": "21470642176",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "name": "ceph_lv0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "tags": {
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.crush_device_class": "",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.encrypted": "0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osd_id": "0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.type": "block",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.vdo": "0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.with_tpm": "0"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            },
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "type": "block",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "vg_name": "ceph_vg0"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:        }
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:    ],
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:    "1": [
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:        {
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "devices": [
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "/dev/loop4"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            ],
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_name": "ceph_lv1",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_size": "21470642176",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "name": "ceph_lv1",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "tags": {
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.crush_device_class": "",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.encrypted": "0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osd_id": "1",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.type": "block",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.vdo": "0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.with_tpm": "0"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            },
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "type": "block",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "vg_name": "ceph_vg1"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:        }
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:    ],
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:    "2": [
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:        {
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "devices": [
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "/dev/loop5"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            ],
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_name": "ceph_lv2",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_size": "21470642176",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "name": "ceph_lv2",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "tags": {
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.crush_device_class": "",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.encrypted": "0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osd_id": "2",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.type": "block",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.vdo": "0",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:                "ceph.with_tpm": "0"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            },
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "type": "block",
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:            "vg_name": "ceph_vg2"
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:        }
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]:    ]
Feb  1 09:51:43 np0005604375 compassionate_hellman[94178]: }
Feb  1 09:51:43 np0005604375 systemd[1]: libpod-0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e.scope: Deactivated successfully.
Feb  1 09:51:43 np0005604375 podman[94144]: 2026-02-01 14:51:43.773792215 +0000 UTC m=+0.441231508 container died 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:51:43 np0005604375 systemd[1]: var-lib-containers-storage-overlay-81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9-merged.mount: Deactivated successfully.
Feb  1 09:51:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 09:51:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243915552' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 09:51:43 np0005604375 podman[94144]: 2026-02-01 14:51:43.821871849 +0000 UTC m=+0.489311152 container remove 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 09:51:43 np0005604375 reverent_joliot[94135]: 
Feb  1 09:51:43 np0005604375 reverent_joliot[94135]: {"epoch":1,"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","modified":"2026-02-01T14:49:56.174590Z","created":"2026-02-01T14:49:56.174590Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Feb  1 09:51:43 np0005604375 reverent_joliot[94135]: dumped monmap epoch 1
Feb  1 09:51:43 np0005604375 systemd[1]: libpod-conmon-0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e.scope: Deactivated successfully.
Feb  1 09:51:43 np0005604375 systemd[1]: libpod-bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb.scope: Deactivated successfully.
Feb  1 09:51:43 np0005604375 podman[94105]: 2026-02-01 14:51:43.844051436 +0000 UTC m=+0.661265424 container died bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:43 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d3227db181a82b92088e9b0f3bc712d6623639b82e3193d9cf7910c151241859-merged.mount: Deactivated successfully.
Feb  1 09:51:43 np0005604375 podman[94105]: 2026-02-01 14:51:43.881087283 +0000 UTC m=+0.698301311 container remove bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:51:43 np0005604375 systemd[1]: libpod-conmon-bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb.scope: Deactivated successfully.
Feb  1 09:51:44 np0005604375 podman[94302]: 2026-02-01 14:51:44.297486014 +0000 UTC m=+0.044444737 container create 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 09:51:44 np0005604375 systemd[1]: Started libpod-conmon-5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8.scope.
Feb  1 09:51:44 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:44 np0005604375 podman[94302]: 2026-02-01 14:51:44.371658541 +0000 UTC m=+0.118617324 container init 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 09:51:44 np0005604375 podman[94302]: 2026-02-01 14:51:44.277746669 +0000 UTC m=+0.024705452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:44 np0005604375 podman[94302]: 2026-02-01 14:51:44.379012378 +0000 UTC m=+0.125971131 container start 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  1 09:51:44 np0005604375 brave_jennings[94318]: 167 167
Feb  1 09:51:44 np0005604375 systemd[1]: libpod-5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8.scope: Deactivated successfully.
Feb  1 09:51:44 np0005604375 podman[94302]: 2026-02-01 14:51:44.384954894 +0000 UTC m=+0.131913657 container attach 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 09:51:44 np0005604375 podman[94302]: 2026-02-01 14:51:44.385383377 +0000 UTC m=+0.132342140 container died 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:44 np0005604375 python3[94301]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:44 np0005604375 systemd[1]: var-lib-containers-storage-overlay-18e8a0603801f0e2d51b3942ad64928cd50f876ed6fe4c371abdd9f91e38f095-merged.mount: Deactivated successfully.
Feb  1 09:51:44 np0005604375 podman[94302]: 2026-02-01 14:51:44.421590659 +0000 UTC m=+0.168549422 container remove 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:51:44 np0005604375 systemd[1]: libpod-conmon-5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8.scope: Deactivated successfully.
Feb  1 09:51:44 np0005604375 podman[94331]: 2026-02-01 14:51:44.493153739 +0000 UTC m=+0.068766198 container create 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:51:44 np0005604375 systemd[1]: Started libpod-conmon-1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1.scope.
Feb  1 09:51:44 np0005604375 podman[94331]: 2026-02-01 14:51:44.470348103 +0000 UTC m=+0.045960612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:44 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46126b5fc918f374e7f6e31b19d986a78afcad6bf30dfd513589806c18af886/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46126b5fc918f374e7f6e31b19d986a78afcad6bf30dfd513589806c18af886/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:44 np0005604375 podman[94356]: 2026-02-01 14:51:44.573437156 +0000 UTC m=+0.049937620 container create 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb  1 09:51:44 np0005604375 podman[94331]: 2026-02-01 14:51:44.589578294 +0000 UTC m=+0.165190763 container init 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 09:51:44 np0005604375 podman[94331]: 2026-02-01 14:51:44.5968656 +0000 UTC m=+0.172478069 container start 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb  1 09:51:44 np0005604375 podman[94331]: 2026-02-01 14:51:44.600061435 +0000 UTC m=+0.175673904 container attach 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 09:51:44 np0005604375 systemd[1]: Started libpod-conmon-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope.
Feb  1 09:51:44 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:44 np0005604375 podman[94356]: 2026-02-01 14:51:44.548632792 +0000 UTC m=+0.025133306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:44 np0005604375 podman[94356]: 2026-02-01 14:51:44.659988589 +0000 UTC m=+0.136489073 container init 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb  1 09:51:44 np0005604375 podman[94356]: 2026-02-01 14:51:44.665658627 +0000 UTC m=+0.142159101 container start 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:51:44 np0005604375 podman[94356]: 2026-02-01 14:51:44.668851942 +0000 UTC m=+0.145352416 container attach 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1656115226' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb  1 09:51:45 np0005604375 awesome_albattani[94372]: [client.openstack]
Feb  1 09:51:45 np0005604375 awesome_albattani[94372]: #011key = AQD1Z39pAAAAABAAx9bXBCrv3oQqUCtEn4NgxQ==
Feb  1 09:51:45 np0005604375 awesome_albattani[94372]: #011caps mgr = "allow *"
Feb  1 09:51:45 np0005604375 awesome_albattani[94372]: #011caps mon = "profile rbd"
Feb  1 09:51:45 np0005604375 awesome_albattani[94372]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Feb  1 09:51:45 np0005604375 systemd[1]: libpod-1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1.scope: Deactivated successfully.
Feb  1 09:51:45 np0005604375 podman[94331]: 2026-02-01 14:51:45.122661921 +0000 UTC m=+0.698274380 container died 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:45 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c46126b5fc918f374e7f6e31b19d986a78afcad6bf30dfd513589806c18af886-merged.mount: Deactivated successfully.
Feb  1 09:51:45 np0005604375 podman[94331]: 2026-02-01 14:51:45.156171374 +0000 UTC m=+0.731783833 container remove 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:45 np0005604375 systemd[1]: libpod-conmon-1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1.scope: Deactivated successfully.
Feb  1 09:51:45 np0005604375 lvm[94490]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:45 np0005604375 lvm[94490]: VG ceph_vg1 finished
Feb  1 09:51:45 np0005604375 lvm[94489]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:45 np0005604375 lvm[94489]: VG ceph_vg0 finished
Feb  1 09:51:45 np0005604375 lvm[94492]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:45 np0005604375 lvm[94492]: VG ceph_vg2 finished
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:45 np0005604375 zen_rubin[94379]: {}
Feb  1 09:51:45 np0005604375 podman[94356]: 2026-02-01 14:51:45.463948248 +0000 UTC m=+0.940448722 container died 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Feb  1 09:51:45 np0005604375 systemd[1]: libpod-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope: Deactivated successfully.
Feb  1 09:51:45 np0005604375 systemd[1]: libpod-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope: Consumed 1.270s CPU time.
Feb  1 09:51:45 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1-merged.mount: Deactivated successfully.
Feb  1 09:51:45 np0005604375 podman[94356]: 2026-02-01 14:51:45.514380422 +0000 UTC m=+0.990880866 container remove 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:45 np0005604375 systemd[1]: libpod-conmon-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope: Deactivated successfully.
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:45 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 886d3577-b177-476c-87ab-959186f1d739 (Updating rgw.rgw deployment (+1 -> 1))
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:45 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.eusbkm on compute-0
Feb  1 09:51:45 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.eusbkm on compute-0
Feb  1 09:51:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:46 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/1656115226' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb  1 09:51:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb  1 09:51:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  1 09:51:46 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:46 np0005604375 podman[94602]: 2026-02-01 14:51:46.121946788 +0000 UTC m=+0.046579304 container create f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:46 np0005604375 systemd[1]: Started libpod-conmon-f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff.scope.
Feb  1 09:51:46 np0005604375 podman[94602]: 2026-02-01 14:51:46.095280167 +0000 UTC m=+0.019912693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:46 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:46 np0005604375 podman[94602]: 2026-02-01 14:51:46.228685637 +0000 UTC m=+0.153318163 container init f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:46 np0005604375 podman[94602]: 2026-02-01 14:51:46.237830695 +0000 UTC m=+0.162463221 container start f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:51:46 np0005604375 podman[94602]: 2026-02-01 14:51:46.242034713 +0000 UTC m=+0.166667239 container attach f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 09:51:46 np0005604375 eager_buck[94664]: 167 167
Feb  1 09:51:46 np0005604375 systemd[1]: libpod-f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff.scope: Deactivated successfully.
Feb  1 09:51:46 np0005604375 podman[94602]: 2026-02-01 14:51:46.245151971 +0000 UTC m=+0.169784497 container died f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  1 09:51:46 np0005604375 systemd[1]: var-lib-containers-storage-overlay-95e0fe7b94bb83bc94b4c1c35c5eaddf9090fb14fab6b13f63bf11d8935b552b-merged.mount: Deactivated successfully.
Feb  1 09:51:46 np0005604375 podman[94602]: 2026-02-01 14:51:46.288653048 +0000 UTC m=+0.213285584 container remove f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:51:46 np0005604375 systemd[1]: libpod-conmon-f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff.scope: Deactivated successfully.
Feb  1 09:51:46 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:46 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:46 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:46 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:46 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:46 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:46 np0005604375 systemd[1]: Starting Ceph rgw.rgw.compute-0.eusbkm for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: Deploying daemon rgw.rgw.compute-0.eusbkm on compute-0
Feb  1 09:51:47 np0005604375 ansible-async_wrapper.py[94857]: Invoked with j602250558024 30 /home/zuul/.ansible/tmp/ansible-tmp-1769957506.097269-36690-130780346629463/AnsiballZ_command.py _
Feb  1 09:51:47 np0005604375 ansible-async_wrapper.py[94886]: Starting module and watcher
Feb  1 09:51:47 np0005604375 ansible-async_wrapper.py[94886]: Start watching 94887 (30)
Feb  1 09:51:47 np0005604375 ansible-async_wrapper.py[94887]: Start module (94887)
Feb  1 09:51:47 np0005604375 ansible-async_wrapper.py[94857]: Return async_wrapper task started.
Feb  1 09:51:47 np0005604375 podman[94910]: 2026-02-01 14:51:47.225013853 +0000 UTC m=+0.058651304 container create 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-rgw-rgw-compute-0-eusbkm, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 09:51:47 np0005604375 python3[94893]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.eusbkm supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:47 np0005604375 podman[94910]: 2026-02-01 14:51:47.281661 +0000 UTC m=+0.115298491 container init 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-rgw-rgw-compute-0-eusbkm, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 09:51:47 np0005604375 podman[94910]: 2026-02-01 14:51:47.291265341 +0000 UTC m=+0.124902802 container start 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-rgw-rgw-compute-0-eusbkm, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 09:51:47 np0005604375 bash[94910]: 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060
Feb  1 09:51:47 np0005604375 podman[94910]: 2026-02-01 14:51:47.204845545 +0000 UTC m=+0.038483026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:47 np0005604375 systemd[1]: Started Ceph rgw.rgw.compute-0.eusbkm for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:51:47 np0005604375 podman[94926]: 2026-02-01 14:51:47.351390146 +0000 UTC m=+0.076846428 container create ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb  1 09:51:47 np0005604375 radosgw[94941]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:51:47 np0005604375 radosgw[94941]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Feb  1 09:51:47 np0005604375 radosgw[94941]: framework: beast
Feb  1 09:51:47 np0005604375 radosgw[94941]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb  1 09:51:47 np0005604375 radosgw[94941]: init_numa not setting numa affinity
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 886d3577-b177-476c-87ab-959186f1d739 (Updating rgw.rgw deployment (+1 -> 1))
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 886d3577-b177-476c-87ab-959186f1d739 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  1 09:51:47 np0005604375 podman[94926]: 2026-02-01 14:51:47.310258856 +0000 UTC m=+0.035715168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev bf815f15-3eda-4b12-8174-8780c3db2bc7 (Updating mds.cephfs deployment (+1 -> 1))
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb  1 09:51:47 np0005604375 systemd[1]: Started libpod-conmon-ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a.scope.
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.agpbju on compute-0
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.agpbju on compute-0
Feb  1 09:51:47 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d948f92e62626acc8b81262c567a5a717a68d6b02f39e496dae37a52d1987/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d948f92e62626acc8b81262c567a5a717a68d6b02f39e496dae37a52d1987/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:47 np0005604375 podman[94926]: 2026-02-01 14:51:47.459289437 +0000 UTC m=+0.184745689 container init ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:51:47 np0005604375 podman[94926]: 2026-02-01 14:51:47.466874491 +0000 UTC m=+0.192330733 container start ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:47 np0005604375 podman[94926]: 2026-02-01 14:51:47.471186453 +0000 UTC m=+0.196642725 container attach ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  1 09:51:47 np0005604375 naughty_swirles[94974]: 
Feb  1 09:51:47 np0005604375 naughty_swirles[94974]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  1 09:51:47 np0005604375 systemd[1]: libpod-ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a.scope: Deactivated successfully.
Feb  1 09:51:47 np0005604375 podman[94926]: 2026-02-01 14:51:47.890502613 +0000 UTC m=+0.615958935 container died ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 09:51:47 np0005604375 systemd[1]: var-lib-containers-storage-overlay-987d948f92e62626acc8b81262c567a5a717a68d6b02f39e496dae37a52d1987-merged.mount: Deactivated successfully.
Feb  1 09:51:47 np0005604375 podman[94926]: 2026-02-01 14:51:47.955333731 +0000 UTC m=+0.680789973 container remove ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:47 np0005604375 systemd[1]: libpod-conmon-ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a.scope: Deactivated successfully.
Feb  1 09:51:47 np0005604375 ansible-async_wrapper.py[94887]: Module complete (94887)
Feb  1 09:51:47 np0005604375 podman[95102]: 2026-02-01 14:51:47.992715924 +0000 UTC m=+0.048124817 container create aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 09:51:48 np0005604375 systemd[1]: Started libpod-conmon-aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817.scope.
Feb  1 09:51:48 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:48 np0005604375 podman[95102]: 2026-02-01 14:51:47.973149823 +0000 UTC m=+0.028558696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:48 np0005604375 podman[95102]: 2026-02-01 14:51:48.076000632 +0000 UTC m=+0.131409595 container init aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:48 np0005604375 podman[95102]: 2026-02-01 14:51:48.085245073 +0000 UTC m=+0.140653926 container start aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:48 np0005604375 elegant_shannon[95122]: 167 167
Feb  1 09:51:48 np0005604375 systemd[1]: libpod-aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817.scope: Deactivated successfully.
Feb  1 09:51:48 np0005604375 podman[95102]: 2026-02-01 14:51:48.090726127 +0000 UTC m=+0.146134980 container attach aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:48 np0005604375 podman[95102]: 2026-02-01 14:51:48.091096408 +0000 UTC m=+0.146505261 container died aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:48 np0005604375 systemd[1]: var-lib-containers-storage-overlay-28ee7a4e650181f7d374f53b6c5c14da4e7b84ed7302922e7ef36b3b7f44747b-merged.mount: Deactivated successfully.
Feb  1 09:51:48 np0005604375 podman[95102]: 2026-02-01 14:51:48.120370453 +0000 UTC m=+0.175779306 container remove aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 09:51:48 np0005604375 systemd[1]: libpod-conmon-aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817.scope: Deactivated successfully.
Feb  1 09:51:48 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:48 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:48 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: Saving service rgw.rgw spec with placement compute-0
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: Deploying daemon mds.cephfs.compute-0.agpbju on compute-0
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb  1 09:51:48 np0005604375 systemd[1]: Reloading.
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 4 completed events
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 09:51:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:48 np0005604375 ceph-mgr[75469]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Feb  1 09:51:48 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:51:48 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:51:48 np0005604375 python3[95227]: ansible-ansible.legacy.async_status Invoked with jid=j602250558024.94857 mode=status _async_dir=/root/.ansible_async
Feb  1 09:51:48 np0005604375 systemd[1]: Starting Ceph mds.cephfs.compute-0.agpbju for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb  1 09:51:48 np0005604375 python3[95335]: ansible-ansible.legacy.async_status Invoked with jid=j602250558024.94857 mode=cleanup _async_dir=/root/.ansible_async
Feb  1 09:51:48 np0005604375 podman[95363]: 2026-02-01 14:51:48.909737145 +0000 UTC m=+0.057927344 container create 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 09:51:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:48 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.agpbju supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:48 np0005604375 podman[95363]: 2026-02-01 14:51:48.882797205 +0000 UTC m=+0.030987494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:48 np0005604375 podman[95363]: 2026-02-01 14:51:48.987876637 +0000 UTC m=+0.136066926 container init 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 09:51:48 np0005604375 podman[95363]: 2026-02-01 14:51:48.994424042 +0000 UTC m=+0.142614281 container start 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:48 np0005604375 bash[95363]: 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d
Feb  1 09:51:49 np0005604375 systemd[1]: Started Ceph mds.cephfs.compute-0.agpbju for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb  1 09:51:49 np0005604375 ceph-mds[95382]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:51:49 np0005604375 ceph-mds[95382]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Feb  1 09:51:49 np0005604375 ceph-mds[95382]: main not setting numa affinity
Feb  1 09:51:49 np0005604375 ceph-mds[95382]: pidfile_write: ignore empty --pid-file
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:49 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju[95378]: starting mds.cephfs.compute-0.agpbju at 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 2 from mon.0
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev bf815f15-3eda-4b12-8174-8780c3db2bc7 (Updating mds.cephfs deployment (+1 -> 1))
Feb  1 09:51:49 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event bf815f15-3eda-4b12-8174-8780c3db2bc7 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  1 09:51:49 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 30 pg[8.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb  1 09:51:49 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb  1 09:51:49 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:49 np0005604375 python3[95501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:49 np0005604375 podman[96035]: 2026-02-01 14:51:49.553207643 +0000 UTC m=+0.039086903 container create 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:49 np0005604375 systemd[1]: Started libpod-conmon-8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2.scope.
Feb  1 09:51:49 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:49 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ca017340c87f17d380def908ee672e17846be7a8e74639f596390bd63ab46b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:49 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ca017340c87f17d380def908ee672e17846be7a8e74639f596390bd63ab46b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:49 np0005604375 podman[96035]: 2026-02-01 14:51:49.608776739 +0000 UTC m=+0.094656019 container init 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:49 np0005604375 podman[96035]: 2026-02-01 14:51:49.615016965 +0000 UTC m=+0.100896225 container start 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 09:51:49 np0005604375 podman[96035]: 2026-02-01 14:51:49.618610037 +0000 UTC m=+0.104489297 container attach 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 09:51:49 np0005604375 podman[96035]: 2026-02-01 14:51:49.537242233 +0000 UTC m=+0.023121503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:49 np0005604375 podman[96127]: 2026-02-01 14:51:49.675985564 +0000 UTC m=+0.048987232 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 09:51:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v68: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:51:49 np0005604375 podman[96127]: 2026-02-01 14:51:49.857546912 +0000 UTC m=+0.230548580 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 09:51:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  1 09:51:50 np0005604375 amazing_joliot[96111]: 
Feb  1 09:51:50 np0005604375 amazing_joliot[96111]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  1 09:51:50 np0005604375 systemd[1]: libpod-8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2.scope: Deactivated successfully.
Feb  1 09:51:50 np0005604375 podman[96035]: 2026-02-01 14:51:50.033383189 +0000 UTC m=+0.519262449 container died 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:50 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d1ca017340c87f17d380def908ee672e17846be7a8e74639f596390bd63ab46b-merged.mount: Deactivated successfully.
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 3 from mon.0
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Monitors have assigned me to become a standby
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 new map
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-02-01T14:51:50:072446+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-01T14:51:37.585458+0000#012modified#0112026-02-01T14:51:37.585459+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.agpbju{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] compat {c=[1],r=[1],i=[1fff]}]
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] up:boot
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] as mds.0
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.agpbju assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.agpbju"} v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.agpbju"} : dispatch
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 all = 0
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e4 new map
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-02-01T14:51:50:079861+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-01T14:51:37.585458+0000#012modified#0112026-02-01T14:51:50.079856+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.agpbju{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.agpbju=up:creating}
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 4 from mon.0
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.4 handle_mds_map I am now mds.0.4
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x1
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x100
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x600
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x601
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x602
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x603
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x604
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x605
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x606
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x607
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x608
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x609
Feb  1 09:51:50 np0005604375 podman[96035]: 2026-02-01 14:51:50.104427391 +0000 UTC m=+0.590306661 container remove 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:51:50 np0005604375 systemd[1]: libpod-conmon-8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2.scope: Deactivated successfully.
Feb  1 09:51:50 np0005604375 ceph-mds[95382]: mds.0.4 creating_done
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.agpbju is now active in filesystem cephfs as rank 0
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: daemon mds.cephfs.compute-0.agpbju assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: Cluster is now healthy
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: daemon mds.cephfs.compute-0.agpbju is now active in filesystem cephfs as rank 0
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:50 np0005604375 python3[96415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:50 np0005604375 podman[96433]: 2026-02-01 14:51:50.88160007 +0000 UTC m=+0.032550009 container create a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:50 np0005604375 systemd[1]: Started libpod-conmon-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope.
Feb  1 09:51:50 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32990e18280e2ca3315ad8cc98f80d39132172514690ee000cc1e60c7e71be7a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32990e18280e2ca3315ad8cc98f80d39132172514690ee000cc1e60c7e71be7a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:50 np0005604375 podman[96433]: 2026-02-01 14:51:50.957036346 +0000 UTC m=+0.107986295 container init a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:50 np0005604375 podman[96433]: 2026-02-01 14:51:50.961849792 +0000 UTC m=+0.112799731 container start a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 09:51:50 np0005604375 podman[96433]: 2026-02-01 14:51:50.964775134 +0000 UTC m=+0.115725113 container attach a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:50 np0005604375 podman[96433]: 2026-02-01 14:51:50.869027435 +0000 UTC m=+0.019977394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:50 np0005604375 podman[96463]: 2026-02-01 14:51:50.979659494 +0000 UTC m=+0.044193367 container create 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:51 np0005604375 systemd[1]: Started libpod-conmon-6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b.scope.
Feb  1 09:51:51 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:51 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 32 pg[9.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:51 np0005604375 podman[96463]: 2026-02-01 14:51:51.047854456 +0000 UTC m=+0.112388379 container init 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:51 np0005604375 podman[96463]: 2026-02-01 14:51:51.052200789 +0000 UTC m=+0.116734622 container start 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 09:51:51 np0005604375 zealous_elion[96481]: 167 167
Feb  1 09:51:51 np0005604375 systemd[1]: libpod-6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b.scope: Deactivated successfully.
Feb  1 09:51:51 np0005604375 podman[96463]: 2026-02-01 14:51:51.055140382 +0000 UTC m=+0.119674295 container attach 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:51 np0005604375 podman[96463]: 2026-02-01 14:51:51.055480561 +0000 UTC m=+0.120014424 container died 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:51 np0005604375 podman[96463]: 2026-02-01 14:51:50.962798279 +0000 UTC m=+0.027332132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:51 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4eb5352217bdbbfc855404e8f453b5badcf7112aad64fb0281ca9668239483ae-merged.mount: Deactivated successfully.
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e5 new map
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-02-01T14:51:51:083176+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-01T14:51:37.585458+0000#012modified#0112026-02-01T14:51:51.083173+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14253 members: 14253#012[mds.cephfs.compute-0.agpbju{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  1 09:51:51 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 5 from mon.0
Feb  1 09:51:51 np0005604375 ceph-mds[95382]: mds.0.4 handle_mds_map I am now mds.0.4
Feb  1 09:51:51 np0005604375 ceph-mds[95382]: mds.0.4 handle_mds_map state change up:creating --> up:active
Feb  1 09:51:51 np0005604375 ceph-mds[95382]: mds.0.4 recovery_done -- successful recovery!
Feb  1 09:51:51 np0005604375 ceph-mds[95382]: mds.0.4 active_start
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] up:active
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.agpbju=up:active}
Feb  1 09:51:51 np0005604375 podman[96463]: 2026-02-01 14:51:51.097502076 +0000 UTC m=+0.162035939 container remove 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:51 np0005604375 systemd[1]: libpod-conmon-6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b.scope: Deactivated successfully.
Feb  1 09:51:51 np0005604375 podman[96528]: 2026-02-01 14:51:51.230853745 +0000 UTC m=+0.046278766 container create aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 09:51:51 np0005604375 systemd[1]: Started libpod-conmon-aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a.scope.
Feb  1 09:51:51 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:51 np0005604375 podman[96528]: 2026-02-01 14:51:51.215631546 +0000 UTC m=+0.031056587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:51 np0005604375 podman[96528]: 2026-02-01 14:51:51.328382994 +0000 UTC m=+0.143808075 container init aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 09:51:51 np0005604375 podman[96528]: 2026-02-01 14:51:51.33957453 +0000 UTC m=+0.154999551 container start aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:51 np0005604375 podman[96528]: 2026-02-01 14:51:51.342517753 +0000 UTC m=+0.157942824 container attach aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb  1 09:51:51 np0005604375 condescending_mclean[96456]: 
Feb  1 09:51:51 np0005604375 condescending_mclean[96456]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Feb  1 09:51:51 np0005604375 systemd[1]: libpod-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope: Deactivated successfully.
Feb  1 09:51:51 np0005604375 conmon[96456]: conmon a17697f0d79390b8af6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope/container/memory.events
Feb  1 09:51:51 np0005604375 podman[96433]: 2026-02-01 14:51:51.371582532 +0000 UTC m=+0.522532481 container died a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:51 np0005604375 systemd[1]: var-lib-containers-storage-overlay-32990e18280e2ca3315ad8cc98f80d39132172514690ee000cc1e60c7e71be7a-merged.mount: Deactivated successfully.
Feb  1 09:51:51 np0005604375 podman[96433]: 2026-02-01 14:51:51.416327353 +0000 UTC m=+0.567277322 container remove a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb  1 09:51:51 np0005604375 systemd[1]: libpod-conmon-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope: Deactivated successfully.
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:51 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v71: 9 pgs: 1 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb  1 09:51:51 np0005604375 jovial_archimedes[96544]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:51:51 np0005604375 jovial_archimedes[96544]: --> All data devices are unavailable
Feb  1 09:51:51 np0005604375 systemd[1]: libpod-aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a.scope: Deactivated successfully.
Feb  1 09:51:51 np0005604375 podman[96528]: 2026-02-01 14:51:51.81350617 +0000 UTC m=+0.628931221 container died aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb  1 09:51:51 np0005604375 systemd[1]: var-lib-containers-storage-overlay-6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b-merged.mount: Deactivated successfully.
Feb  1 09:51:51 np0005604375 podman[96528]: 2026-02-01 14:51:51.864471836 +0000 UTC m=+0.679896887 container remove aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 09:51:51 np0005604375 systemd[1]: libpod-conmon-aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a.scope: Deactivated successfully.
Feb  1 09:51:52 np0005604375 ansible-async_wrapper.py[94886]: Done in kid B.
Feb  1 09:51:52 np0005604375 python3[96667]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:52 np0005604375 podman[96682]: 2026-02-01 14:51:52.34064443 +0000 UTC m=+0.039891416 container create 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 09:51:52 np0005604375 podman[96680]: 2026-02-01 14:51:52.359004637 +0000 UTC m=+0.058070058 container create 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:52 np0005604375 systemd[1]: Started libpod-conmon-28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26.scope.
Feb  1 09:51:52 np0005604375 systemd[1]: Started libpod-conmon-1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361.scope.
Feb  1 09:51:52 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:52 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee400e93a8c8fdd4a3deeb5bfc470eccd6714b3c8ac6faafca2950cdd75285f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee400e93a8c8fdd4a3deeb5bfc470eccd6714b3c8ac6faafca2950cdd75285f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:52 np0005604375 podman[96680]: 2026-02-01 14:51:52.405394795 +0000 UTC m=+0.104460236 container init 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:52 np0005604375 podman[96682]: 2026-02-01 14:51:52.409597873 +0000 UTC m=+0.108844869 container init 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 09:51:52 np0005604375 podman[96680]: 2026-02-01 14:51:52.409759248 +0000 UTC m=+0.108824669 container start 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb  1 09:51:52 np0005604375 nice_banach[96714]: 167 167
Feb  1 09:51:52 np0005604375 podman[96680]: 2026-02-01 14:51:52.412484875 +0000 UTC m=+0.111550296 container attach 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 09:51:52 np0005604375 systemd[1]: libpod-1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361.scope: Deactivated successfully.
Feb  1 09:51:52 np0005604375 podman[96680]: 2026-02-01 14:51:52.413001439 +0000 UTC m=+0.112066860 container died 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 09:51:52 np0005604375 podman[96682]: 2026-02-01 14:51:52.413097632 +0000 UTC m=+0.112344608 container start 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:52 np0005604375 podman[96682]: 2026-02-01 14:51:52.31936775 +0000 UTC m=+0.018614726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:52 np0005604375 podman[96680]: 2026-02-01 14:51:52.326707897 +0000 UTC m=+0.025773338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:52 np0005604375 podman[96682]: 2026-02-01 14:51:52.42473802 +0000 UTC m=+0.123985226 container attach 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 09:51:52 np0005604375 systemd[1]: var-lib-containers-storage-overlay-fdcd4ba8c444964fbd8990fc9aa6a6b8357097de04497ba368419aee66cab732-merged.mount: Deactivated successfully.
Feb  1 09:51:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb  1 09:51:52 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  1 09:51:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb  1 09:51:52 np0005604375 podman[96680]: 2026-02-01 14:51:52.467961559 +0000 UTC m=+0.167026970 container remove 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:52 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb  1 09:51:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb  1 09:51:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb  1 09:51:52 np0005604375 systemd[1]: libpod-conmon-1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361.scope: Deactivated successfully.
Feb  1 09:51:52 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:52 np0005604375 podman[96759]: 2026-02-01 14:51:52.592357815 +0000 UTC m=+0.034775471 container create 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 09:51:52 np0005604375 systemd[1]: Started libpod-conmon-6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3.scope.
Feb  1 09:51:52 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:52 np0005604375 podman[96759]: 2026-02-01 14:51:52.577579189 +0000 UTC m=+0.019996885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:52 np0005604375 podman[96759]: 2026-02-01 14:51:52.69008594 +0000 UTC m=+0.132503636 container init 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 09:51:52 np0005604375 podman[96759]: 2026-02-01 14:51:52.695715339 +0000 UTC m=+0.138133005 container start 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:52 np0005604375 podman[96759]: 2026-02-01 14:51:52.698895798 +0000 UTC m=+0.141313484 container attach 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 09:51:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  1 09:51:52 np0005604375 intelligent_rhodes[96712]: 
Feb  1 09:51:52 np0005604375 intelligent_rhodes[96712]: [{"container_id": "9bd653623727", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2026-02-01T14:50:38.747657Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-02-01T14:50:38.821195Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617737Z", "memory_usage": 7790919, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-02-01T14:50:38.651371Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@crash.compute-0", "version": "20.2.0"}, {"container_id": "7ea15bdd3bc5", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "8.86%", "created": "2026-02-01T14:51:49.007731Z", "daemon_id": "cephfs.compute-0.agpbju", "daemon_name": "mds.cephfs.compute-0.agpbju", "daemon_type": "mds", "events": ["2026-02-01T14:51:49.084836Z daemon:mds.cephfs.compute-0.agpbju [INFO] \"Deployed mds.cephfs.compute-0.agpbju on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-02-01T14:51:50.618162Z", "memory_usage": 13537116, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-02-01T14:51:48.892159Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mds.cephfs.compute-0.agpbju", "version": "20.2.0"}, {"container_id": "c0b520f4a011", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.86%", "created": "2026-02-01T14:50:02.127621Z", "daemon_id": "compute-0.viosrg", "daemon_name": "mgr.compute-0.viosrg", "daemon_type": "mgr", "events": ["2026-02-01T14:50:42.944653Z daemon:mgr.compute-0.viosrg [INFO] \"Reconfigured mgr.compute-0.viosrg on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617666Z", "memory_usage": 546203238, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-01T14:50:02.023366Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.viosrg", "version": "20.2.0"}, {"container_id": "75630865abcd", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.93%", "created": "2026-02-01T14:49:58.016505Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-02-01T14:50:42.376548Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617574Z", "memory_request": 2147483648, "memory_usage": 40433090, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-02-01T14:50:00.222597Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mon.compute-0", "version": "20.2.0"}, {"container_id": "88ca06885fff", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.43%", "created": "2026-02-01T14:51:00.928311Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-02-01T14:51:00.991275Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617818Z", "memory_request": 4294967296, "memory_usage": 58615398, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-01T14:51:00.858120Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@osd.0", "version": "20.2.0"}, {"container_id": "751c852b5ece", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.69%", "created": "2026-02-01T14:51:04.396414Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-02-01T14:51:04.483113Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617922Z", "memory_request": 4294967296, "memory_usage": 57807994, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-01T14:51:04.287775Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@osd.1", "version": "20.2.0"}, {"container_id": "e57f55d1e39c", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.84%", "created": "2026-02-01T14:51:08.085523Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-02-01T14:51:08.160346Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617991Z", "memory_request": 4294967296, "memory_usage": 56088330, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-01T14:51:07.966157Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@osd.2", "version": "20.2.0"}, {"container_id": "5a12c18d2f79", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Feb  1 09:51:52 np0005604375 systemd[1]: libpod-28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26.scope: Deactivated successfully.
Feb  1 09:51:52 np0005604375 podman[96682]: 2026-02-01 14:51:52.814994931 +0000 UTC m=+0.514241947 container died 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:52 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ee400e93a8c8fdd4a3deeb5bfc470eccd6714b3c8ac6faafca2950cdd75285f3-merged.mount: Deactivated successfully.
Feb  1 09:51:52 np0005604375 podman[96682]: 2026-02-01 14:51:52.854348801 +0000 UTC m=+0.553595817 container remove 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 09:51:52 np0005604375 rsyslogd[1001]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "9bd653623727", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb  1 09:51:52 np0005604375 systemd[1]: libpod-conmon-28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26.scope: Deactivated successfully.
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]: {
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:    "0": [
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:        {
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "devices": [
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "/dev/loop3"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            ],
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_name": "ceph_lv0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_size": "21470642176",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "name": "ceph_lv0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "tags": {
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.crush_device_class": "",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.encrypted": "0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osd_id": "0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.type": "block",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.vdo": "0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.with_tpm": "0"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            },
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "type": "block",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "vg_name": "ceph_vg0"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:        }
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:    ],
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:    "1": [
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:        {
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "devices": [
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "/dev/loop4"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            ],
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_name": "ceph_lv1",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_size": "21470642176",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "name": "ceph_lv1",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "tags": {
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.crush_device_class": "",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.encrypted": "0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osd_id": "1",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.type": "block",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.vdo": "0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.with_tpm": "0"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            },
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "type": "block",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "vg_name": "ceph_vg1"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:        }
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:    ],
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:    "2": [
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:        {
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "devices": [
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "/dev/loop5"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            ],
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_name": "ceph_lv2",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_size": "21470642176",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "name": "ceph_lv2",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "tags": {
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.crush_device_class": "",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.encrypted": "0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osd_id": "2",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.type": "block",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.vdo": "0",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:                "ceph.with_tpm": "0"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            },
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "type": "block",
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:            "vg_name": "ceph_vg2"
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:        }
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]:    ]
Feb  1 09:51:52 np0005604375 fervent_beaver[96776]: }
Feb  1 09:51:52 np0005604375 systemd[1]: libpod-6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3.scope: Deactivated successfully.
Feb  1 09:51:52 np0005604375 podman[96759]: 2026-02-01 14:51:52.962790818 +0000 UTC m=+0.405208474 container died 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:53 np0005604375 podman[96759]: 2026-02-01 14:51:53.002493457 +0000 UTC m=+0.444911123 container remove 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 09:51:53 np0005604375 systemd[1]: libpod-conmon-6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3.scope: Deactivated successfully.
Feb  1 09:51:53 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a-merged.mount: Deactivated successfully.
Feb  1 09:51:53 np0005604375 podman[96874]: 2026-02-01 14:51:53.403673715 +0000 UTC m=+0.040593285 container create 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:51:53 np0005604375 systemd[1]: Started libpod-conmon-5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918.scope.
Feb  1 09:51:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb  1 09:51:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  1 09:51:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb  1 09:51:53 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:53 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb  1 09:51:53 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 5 completed events
Feb  1 09:51:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 09:51:53 np0005604375 podman[96874]: 2026-02-01 14:51:53.383846616 +0000 UTC m=+0.020766186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:53 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb  1 09:51:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:53 np0005604375 podman[96874]: 2026-02-01 14:51:53.489093763 +0000 UTC m=+0.126013383 container init 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 09:51:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:53 np0005604375 podman[96874]: 2026-02-01 14:51:53.495073971 +0000 UTC m=+0.131993521 container start 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:53 np0005604375 zen_albattani[96890]: 167 167
Feb  1 09:51:53 np0005604375 systemd[1]: libpod-5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918.scope: Deactivated successfully.
Feb  1 09:51:53 np0005604375 podman[96874]: 2026-02-01 14:51:53.500779182 +0000 UTC m=+0.137698742 container attach 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 09:51:53 np0005604375 podman[96874]: 2026-02-01 14:51:53.501258686 +0000 UTC m=+0.138178236 container died 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 09:51:53 np0005604375 systemd[1]: var-lib-containers-storage-overlay-76a965885ccef4b9d6d2a05f508c5bc87ba770e12b698928b1d91cea91f9c42e-merged.mount: Deactivated successfully.
Feb  1 09:51:53 np0005604375 podman[96874]: 2026-02-01 14:51:53.550216076 +0000 UTC m=+0.187135616 container remove 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb  1 09:51:53 np0005604375 systemd[1]: libpod-conmon-5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918.scope: Deactivated successfully.
Feb  1 09:51:53 np0005604375 podman[96940]: 2026-02-01 14:51:53.66139592 +0000 UTC m=+0.031770537 container create aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  1 09:51:53 np0005604375 systemd[1]: Started libpod-conmon-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope.
Feb  1 09:51:53 np0005604375 python3[96934]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v74: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb  1 09:51:53 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:53 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:53 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:53 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:53 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:53 np0005604375 podman[96940]: 2026-02-01 14:51:53.739235474 +0000 UTC m=+0.109610111 container init aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:53 np0005604375 podman[96940]: 2026-02-01 14:51:53.647157299 +0000 UTC m=+0.017531946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:53 np0005604375 podman[96940]: 2026-02-01 14:51:53.746212511 +0000 UTC m=+0.116587168 container start aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:53 np0005604375 podman[96940]: 2026-02-01 14:51:53.749956926 +0000 UTC m=+0.120331573 container attach aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 09:51:53 np0005604375 podman[96959]: 2026-02-01 14:51:53.763164029 +0000 UTC m=+0.048100207 container create 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 09:51:53 np0005604375 systemd[1]: Started libpod-conmon-255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91.scope.
Feb  1 09:51:53 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:53 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280bc7f9563d50d11d2f8003da972411dc23a938ce086f31050c48bd13a0062b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:53 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280bc7f9563d50d11d2f8003da972411dc23a938ce086f31050c48bd13a0062b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:53 np0005604375 podman[96959]: 2026-02-01 14:51:53.747524408 +0000 UTC m=+0.032460616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:53 np0005604375 podman[96959]: 2026-02-01 14:51:53.849874463 +0000 UTC m=+0.134810661 container init 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb  1 09:51:53 np0005604375 podman[96959]: 2026-02-01 14:51:53.853926827 +0000 UTC m=+0.138863055 container start 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:53 np0005604375 podman[96959]: 2026-02-01 14:51:53.857375144 +0000 UTC m=+0.142311412 container attach 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326524861' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  1 09:51:54 np0005604375 loving_babbage[96976]: 
Feb  1 09:51:54 np0005604375 loving_babbage[96976]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":113,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1769957475,"num_in_osds":3,"osd_in_since":1769957454,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":8},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":29,"data_bytes":463390,"bytes_used":83931136,"bytes_avail":64327995392,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-02-01T14:51:51:083176+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.agpbju","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-01T14:51:19.699816+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"2e17c372-c1ad-48d6-8bf0-bbf5585c23cf":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Feb  1 09:51:54 np0005604375 lvm[97071]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:51:54 np0005604375 lvm[97073]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:51:54 np0005604375 lvm[97073]: VG ceph_vg1 finished
Feb  1 09:51:54 np0005604375 lvm[97071]: VG ceph_vg0 finished
Feb  1 09:51:54 np0005604375 systemd[1]: libpod-255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91.scope: Deactivated successfully.
Feb  1 09:51:54 np0005604375 podman[96959]: 2026-02-01 14:51:54.374853332 +0000 UTC m=+0.659789510 container died 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 09:51:54 np0005604375 lvm[97077]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:51:54 np0005604375 lvm[97077]: VG ceph_vg2 finished
Feb  1 09:51:54 np0005604375 systemd[1]: var-lib-containers-storage-overlay-280bc7f9563d50d11d2f8003da972411dc23a938ce086f31050c48bd13a0062b-merged.mount: Deactivated successfully.
Feb  1 09:51:54 np0005604375 podman[96959]: 2026-02-01 14:51:54.416111235 +0000 UTC m=+0.701047423 container remove 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb  1 09:51:54 np0005604375 systemd[1]: libpod-conmon-255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91.scope: Deactivated successfully.
Feb  1 09:51:54 np0005604375 dreamy_elbakyan[96957]: {}
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb  1 09:51:54 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 36 pg[11.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb  1 09:51:54 np0005604375 systemd[1]: libpod-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope: Deactivated successfully.
Feb  1 09:51:54 np0005604375 systemd[1]: libpod-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope: Consumed 1.083s CPU time.
Feb  1 09:51:54 np0005604375 podman[96940]: 2026-02-01 14:51:54.520775775 +0000 UTC m=+0.891150432 container died aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:54 np0005604375 systemd[1]: var-lib-containers-storage-overlay-1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788-merged.mount: Deactivated successfully.
Feb  1 09:51:54 np0005604375 podman[96940]: 2026-02-01 14:51:54.563393047 +0000 UTC m=+0.933767704 container remove aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:54 np0005604375 systemd[1]: libpod-conmon-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope: Deactivated successfully.
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:55 np0005604375 podman[97224]: 2026-02-01 14:51:55.068799504 +0000 UTC m=+0.043197389 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 09:51:55 np0005604375 ceph-mds[95382]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb  1 09:51:55 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju[95378]: 2026-02-01T14:51:55.096+0000 7efeb15b5640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb  1 09:51:55 np0005604375 podman[97224]: 2026-02-01 14:51:55.209812649 +0000 UTC m=+0.184210494 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 09:51:55 np0005604375 python3[97270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:55 np0005604375 podman[97305]: 2026-02-01 14:51:55.34181708 +0000 UTC m=+0.040858893 container create dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:55 np0005604375 systemd[1]: Started libpod-conmon-dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf.scope.
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:51:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e8869a09d4f9ebbf153579aca9c55d69b20d5417084bd9de7fa3d09e74f4f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e8869a09d4f9ebbf153579aca9c55d69b20d5417084bd9de7fa3d09e74f4f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:55 np0005604375 podman[97305]: 2026-02-01 14:51:55.319406868 +0000 UTC m=+0.018448671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:55 np0005604375 podman[97305]: 2026-02-01 14:51:55.426377464 +0000 UTC m=+0.125419317 container init dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 09:51:55 np0005604375 podman[97305]: 2026-02-01 14:51:55.434196514 +0000 UTC m=+0.133238297 container start dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:55 np0005604375 podman[97305]: 2026-02-01 14:51:55.437504068 +0000 UTC m=+0.136545941 container attach dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb  1 09:51:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v77: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  1 09:51:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1076259395' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  1 09:51:55 np0005604375 eager_fermi[97337]: 
Feb  1 09:51:55 np0005604375 eager_fermi[97337]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.eusbkm","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb  1 09:51:55 np0005604375 systemd[1]: libpod-dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf.scope: Deactivated successfully.
Feb  1 09:51:55 np0005604375 podman[97460]: 2026-02-01 14:51:55.87712204 +0000 UTC m=+0.027617279 container died dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 09:51:55 np0005604375 systemd[1]: var-lib-containers-storage-overlay-63e8869a09d4f9ebbf153579aca9c55d69b20d5417084bd9de7fa3d09e74f4f9-merged.mount: Deactivated successfully.
Feb  1 09:51:55 np0005604375 podman[97460]: 2026-02-01 14:51:55.912074895 +0000 UTC m=+0.062570134 container remove dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:55 np0005604375 systemd[1]: libpod-conmon-dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf.scope: Deactivated successfully.
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:51:56 np0005604375 podman[97553]: 2026-02-01 14:51:56.42909091 +0000 UTC m=+0.035714928 container create 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 09:51:56 np0005604375 systemd[1]: Started libpod-conmon-2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2.scope.
Feb  1 09:51:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:56 np0005604375 podman[97553]: 2026-02-01 14:51:56.497857928 +0000 UTC m=+0.104481976 container init 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 09:51:56 np0005604375 podman[97553]: 2026-02-01 14:51:56.505957977 +0000 UTC m=+0.112581995 container start 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb  1 09:51:56 np0005604375 nifty_wing[97570]: 167 167
Feb  1 09:51:56 np0005604375 systemd[1]: libpod-2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2.scope: Deactivated successfully.
Feb  1 09:51:56 np0005604375 podman[97553]: 2026-02-01 14:51:56.413708656 +0000 UTC m=+0.020332684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:56 np0005604375 podman[97553]: 2026-02-01 14:51:56.510091013 +0000 UTC m=+0.116715081 container attach 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 09:51:56 np0005604375 podman[97553]: 2026-02-01 14:51:56.510407612 +0000 UTC m=+0.117031640 container died 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:51:56 np0005604375 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  1 09:51:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay-797ef62bb2044f616973c6b80112a4a4c8a305c336907dbf4f031c4a2e995921-merged.mount: Deactivated successfully.
Feb  1 09:51:56 np0005604375 podman[97553]: 2026-02-01 14:51:56.55040136 +0000 UTC m=+0.157025388 container remove 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:56 np0005604375 systemd[1]: libpod-conmon-2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2.scope: Deactivated successfully.
Feb  1 09:51:56 np0005604375 podman[97637]: 2026-02-01 14:51:56.695367645 +0000 UTC m=+0.035783559 container create aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:56 np0005604375 python3[97613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:56 np0005604375 systemd[1]: Started libpod-conmon-aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4.scope.
Feb  1 09:51:56 np0005604375 podman[97637]: 2026-02-01 14:51:56.67780748 +0000 UTC m=+0.018223424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:56 np0005604375 podman[97651]: 2026-02-01 14:51:56.780726031 +0000 UTC m=+0.062003219 container create 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:56 np0005604375 radosgw[94941]: v1 topic migration: starting v1 topic migration..
Feb  1 09:51:56 np0005604375 radosgw[94941]: v1 topic migration: finished v1 topic migration
Feb  1 09:51:56 np0005604375 podman[97637]: 2026-02-01 14:51:56.822658553 +0000 UTC m=+0.163074477 container init aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:56 np0005604375 podman[97637]: 2026-02-01 14:51:56.83071525 +0000 UTC m=+0.171131164 container start aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:56 np0005604375 podman[97637]: 2026-02-01 14:51:56.835637179 +0000 UTC m=+0.176053263 container attach aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:56 np0005604375 radosgw[94941]: framework: beast
Feb  1 09:51:56 np0005604375 radosgw[94941]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb  1 09:51:56 np0005604375 radosgw[94941]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb  1 09:51:56 np0005604375 systemd[1]: Started libpod-conmon-597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca.scope.
Feb  1 09:51:56 np0005604375 podman[97651]: 2026-02-01 14:51:56.747501925 +0000 UTC m=+0.028779203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:56 np0005604375 radosgw[94941]: starting handler: beast
Feb  1 09:51:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35db5d754cc527248f29882974b2f706affb36c525aa0338a66c729e554fb9c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35db5d754cc527248f29882974b2f706affb36c525aa0338a66c729e554fb9c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:56 np0005604375 radosgw[94941]: set uid:gid to 167:167 (ceph:ceph)
Feb  1 09:51:56 np0005604375 radosgw[94941]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.eusbkm,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=519a3cee-c587-4379-95a1-5c7fa227c87c,zone_name=default,zonegroup_id=8a86e5a8-eaaa-443e-b262-61c80d35fad5,zonegroup_name=default}
Feb  1 09:51:56 np0005604375 podman[97651]: 2026-02-01 14:51:56.894158429 +0000 UTC m=+0.175435657 container init 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 09:51:56 np0005604375 podman[97651]: 2026-02-01 14:51:56.899142149 +0000 UTC m=+0.180419357 container start 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:56 np0005604375 podman[97651]: 2026-02-01 14:51:56.903993726 +0000 UTC m=+0.185270924 container attach 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:57 np0005604375 eloquent_diffie[97666]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:51:57 np0005604375 eloquent_diffie[97666]: --> All data devices are unavailable
Feb  1 09:51:57 np0005604375 podman[97637]: 2026-02-01 14:51:57.315089515 +0000 UTC m=+0.655505449 container died aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 09:51:57 np0005604375 systemd[1]: libpod-aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4.scope: Deactivated successfully.
Feb  1 09:51:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay-aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128-merged.mount: Deactivated successfully.
Feb  1 09:51:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb  1 09:51:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891653748' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb  1 09:51:57 np0005604375 angry_tu[97690]: mimic
Feb  1 09:51:57 np0005604375 podman[97637]: 2026-02-01 14:51:57.376222898 +0000 UTC m=+0.716638832 container remove aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 09:51:57 np0005604375 systemd[1]: libpod-597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca.scope: Deactivated successfully.
Feb  1 09:51:57 np0005604375 podman[97651]: 2026-02-01 14:51:57.380584191 +0000 UTC m=+0.661861379 container died 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 09:51:57 np0005604375 systemd[1]: libpod-conmon-aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4.scope: Deactivated successfully.
Feb  1 09:51:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay-35db5d754cc527248f29882974b2f706affb36c525aa0338a66c729e554fb9c4-merged.mount: Deactivated successfully.
Feb  1 09:51:57 np0005604375 podman[97651]: 2026-02-01 14:51:57.423273404 +0000 UTC m=+0.704550592 container remove 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 09:51:57 np0005604375 systemd[1]: libpod-conmon-597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca.scope: Deactivated successfully.
Feb  1 09:51:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v79: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 241 B/s rd, 483 B/s wr, 1 op/s
Feb  1 09:51:57 np0005604375 podman[97819]: 2026-02-01 14:51:57.831744719 +0000 UTC m=+0.060835146 container create a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 09:51:57 np0005604375 systemd[1]: Started libpod-conmon-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope.
Feb  1 09:51:57 np0005604375 podman[97819]: 2026-02-01 14:51:57.807279779 +0000 UTC m=+0.036370256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:57 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:57 np0005604375 podman[97819]: 2026-02-01 14:51:57.924116543 +0000 UTC m=+0.153207020 container init a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb  1 09:51:57 np0005604375 podman[97819]: 2026-02-01 14:51:57.9328944 +0000 UTC m=+0.161984827 container start a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 09:51:57 np0005604375 podman[97819]: 2026-02-01 14:51:57.936726568 +0000 UTC m=+0.165817065 container attach a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:51:57 np0005604375 jolly_tu[97835]: 167 167
Feb  1 09:51:57 np0005604375 systemd[1]: libpod-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope: Deactivated successfully.
Feb  1 09:51:57 np0005604375 conmon[97835]: conmon a7f4815c27e2a7ff3245 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope/container/memory.events
Feb  1 09:51:57 np0005604375 podman[97819]: 2026-02-01 14:51:57.941113522 +0000 UTC m=+0.170203959 container died a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 09:51:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay-696822be9df93c20b0d56457f9abc5962dcd4f926436ea595a8726681f269fbe-merged.mount: Deactivated successfully.
Feb  1 09:51:57 np0005604375 podman[97819]: 2026-02-01 14:51:57.985967166 +0000 UTC m=+0.215057593 container remove a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:57 np0005604375 systemd[1]: libpod-conmon-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope: Deactivated successfully.
Feb  1 09:51:58 np0005604375 podman[97885]: 2026-02-01 14:51:58.124200103 +0000 UTC m=+0.047846450 container create db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 09:51:58 np0005604375 systemd[1]: Started libpod-conmon-db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92.scope.
Feb  1 09:51:58 np0005604375 python3[97886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:51:58 np0005604375 podman[97885]: 2026-02-01 14:51:58.099741284 +0000 UTC m=+0.023387681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:58 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:58 np0005604375 podman[97885]: 2026-02-01 14:51:58.229716908 +0000 UTC m=+0.153363265 container init db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:51:58 np0005604375 podman[97885]: 2026-02-01 14:51:58.238922847 +0000 UTC m=+0.162569194 container start db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:58 np0005604375 podman[97885]: 2026-02-01 14:51:58.243550358 +0000 UTC m=+0.167196765 container attach db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 09:51:58 np0005604375 podman[97905]: 2026-02-01 14:51:58.279415489 +0000 UTC m=+0.066031543 container create 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 09:51:58 np0005604375 systemd[1]: Started libpod-conmon-574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f.scope.
Feb  1 09:51:58 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:58 np0005604375 podman[97905]: 2026-02-01 14:51:58.254814155 +0000 UTC m=+0.041430269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:51:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d67b88f99e7c446e9549ffdba1fb3860d9a238a8f354b5e4bf8069c1afc58b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:58 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d67b88f99e7c446e9549ffdba1fb3860d9a238a8f354b5e4bf8069c1afc58b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:58 np0005604375 podman[97905]: 2026-02-01 14:51:58.365466464 +0000 UTC m=+0.152082578 container init 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:58 np0005604375 podman[97905]: 2026-02-01 14:51:58.36923016 +0000 UTC m=+0.155846204 container start 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:51:58 np0005604375 podman[97905]: 2026-02-01 14:51:58.372362699 +0000 UTC m=+0.158978823 container attach 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 09:51:58 np0005604375 eager_albattani[97902]: {
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:    "0": [
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:        {
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "devices": [
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "/dev/loop3"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            ],
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_name": "ceph_lv0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_size": "21470642176",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "name": "ceph_lv0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "tags": {
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.crush_device_class": "",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.encrypted": "0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osd_id": "0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.type": "block",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.vdo": "0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.with_tpm": "0"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            },
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "type": "block",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "vg_name": "ceph_vg0"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:        }
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:    ],
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:    "1": [
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:        {
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "devices": [
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "/dev/loop4"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            ],
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_name": "ceph_lv1",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_size": "21470642176",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "name": "ceph_lv1",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "tags": {
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.crush_device_class": "",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.encrypted": "0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osd_id": "1",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.type": "block",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.vdo": "0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.with_tpm": "0"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            },
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "type": "block",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "vg_name": "ceph_vg1"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:        }
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:    ],
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:    "2": [
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:        {
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "devices": [
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "/dev/loop5"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            ],
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_name": "ceph_lv2",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_size": "21470642176",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "name": "ceph_lv2",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "tags": {
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.cluster_name": "ceph",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.crush_device_class": "",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.encrypted": "0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.objectstore": "bluestore",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osd_id": "2",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.type": "block",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.vdo": "0",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:                "ceph.with_tpm": "0"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            },
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "type": "block",
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:            "vg_name": "ceph_vg2"
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:        }
Feb  1 09:51:58 np0005604375 eager_albattani[97902]:    ]
Feb  1 09:51:58 np0005604375 eager_albattani[97902]: }
Feb  1 09:51:58 np0005604375 systemd[1]: libpod-db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92.scope: Deactivated successfully.
Feb  1 09:51:58 np0005604375 podman[97885]: 2026-02-01 14:51:58.524271871 +0000 UTC m=+0.447918218 container died db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:51:58 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504-merged.mount: Deactivated successfully.
Feb  1 09:51:58 np0005604375 podman[97885]: 2026-02-01 14:51:58.571967485 +0000 UTC m=+0.495613832 container remove db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  1 09:51:58 np0005604375 systemd[1]: libpod-conmon-db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92.scope: Deactivated successfully.
Feb  1 09:51:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb  1 09:51:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241932449' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb  1 09:51:58 np0005604375 eloquent_gates[97922]: 
Feb  1 09:51:58 np0005604375 eloquent_gates[97922]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Feb  1 09:51:58 np0005604375 systemd[1]: libpod-574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f.scope: Deactivated successfully.
Feb  1 09:51:58 np0005604375 podman[97905]: 2026-02-01 14:51:58.910668783 +0000 UTC m=+0.697284797 container died 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:51:58 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d1d67b88f99e7c446e9549ffdba1fb3860d9a238a8f354b5e4bf8069c1afc58b-merged.mount: Deactivated successfully.
Feb  1 09:51:58 np0005604375 podman[97905]: 2026-02-01 14:51:58.949410825 +0000 UTC m=+0.736026879 container remove 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:51:58 np0005604375 systemd[1]: libpod-conmon-574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f.scope: Deactivated successfully.
Feb  1 09:51:59 np0005604375 podman[98036]: 2026-02-01 14:51:59.031419487 +0000 UTC m=+0.050684810 container create 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:51:59 np0005604375 systemd[1]: Started libpod-conmon-53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d.scope.
Feb  1 09:51:59 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:59 np0005604375 podman[98036]: 2026-02-01 14:51:59.013714028 +0000 UTC m=+0.032979311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:59 np0005604375 podman[98036]: 2026-02-01 14:51:59.117528885 +0000 UTC m=+0.136794268 container init 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:59 np0005604375 podman[98036]: 2026-02-01 14:51:59.125772897 +0000 UTC m=+0.145038220 container start 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:51:59 np0005604375 podman[98036]: 2026-02-01 14:51:59.129246575 +0000 UTC m=+0.148511948 container attach 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:51:59 np0005604375 serene_dijkstra[98052]: 167 167
Feb  1 09:51:59 np0005604375 systemd[1]: libpod-53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d.scope: Deactivated successfully.
Feb  1 09:51:59 np0005604375 podman[98036]: 2026-02-01 14:51:59.133367391 +0000 UTC m=+0.152632704 container died 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 09:51:59 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ad6abee841fa0d5701f0d14300cf32e8f3656ea4c005d91941e44dc156aae4d2-merged.mount: Deactivated successfully.
Feb  1 09:51:59 np0005604375 podman[98036]: 2026-02-01 14:51:59.179732688 +0000 UTC m=+0.198998001 container remove 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 09:51:59 np0005604375 systemd[1]: libpod-conmon-53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d.scope: Deactivated successfully.
Feb  1 09:51:59 np0005604375 podman[98076]: 2026-02-01 14:51:59.354112604 +0000 UTC m=+0.057519983 container create 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 09:51:59 np0005604375 systemd[1]: Started libpod-conmon-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope.
Feb  1 09:51:59 np0005604375 podman[98076]: 2026-02-01 14:51:59.333135112 +0000 UTC m=+0.036542561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:51:59 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:51:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:51:59 np0005604375 podman[98076]: 2026-02-01 14:51:59.462054147 +0000 UTC m=+0.165461556 container init 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 09:51:59 np0005604375 podman[98076]: 2026-02-01 14:51:59.473733276 +0000 UTC m=+0.177140655 container start 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:51:59 np0005604375 podman[98076]: 2026-02-01 14:51:59.477253475 +0000 UTC m=+0.180660854 container attach 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:51:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v80: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Feb  1 09:52:00 np0005604375 lvm[98170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:52:00 np0005604375 lvm[98170]: VG ceph_vg1 finished
Feb  1 09:52:00 np0005604375 lvm[98169]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:52:00 np0005604375 lvm[98169]: VG ceph_vg0 finished
Feb  1 09:52:00 np0005604375 lvm[98172]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:52:00 np0005604375 lvm[98172]: VG ceph_vg2 finished
Feb  1 09:52:00 np0005604375 boring_chandrasekhar[98091]: {}
Feb  1 09:52:00 np0005604375 systemd[1]: libpod-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope: Deactivated successfully.
Feb  1 09:52:00 np0005604375 systemd[1]: libpod-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope: Consumed 1.077s CPU time.
Feb  1 09:52:00 np0005604375 podman[98076]: 2026-02-01 14:52:00.169982283 +0000 UTC m=+0.873389652 container died 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 09:52:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a-merged.mount: Deactivated successfully.
Feb  1 09:52:00 np0005604375 podman[98076]: 2026-02-01 14:52:00.205902845 +0000 UTC m=+0.909310214 container remove 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  1 09:52:00 np0005604375 systemd[1]: libpod-conmon-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope: Deactivated successfully.
Feb  1 09:52:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:52:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:52:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v81: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 9.2 KiB/s wr, 197 op/s
Feb  1 09:52:03 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 2e17c372-c1ad-48d6-8bf0-bbf5585c23cf (Global Recovery Event) in 15 seconds
Feb  1 09:52:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v82: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 8.0 KiB/s wr, 173 op/s
Feb  1 09:52:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v83: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 141 op/s
Feb  1 09:52:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 5.7 KiB/s wr, 126 op/s
Feb  1 09:52:08 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 6 completed events
Feb  1 09:52:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 09:52:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb  1 09:52:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb  1 09:52:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:52:17
Feb  1 09:52:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:52:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:52:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb  1 09:52:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 09:52:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.243253721423607e-07 of space, bias 4.0, pg target 0.0007491904465708329 quantized to 16 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb  1 09:52:18 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 5f865ac9-5821-461d-bf71-3fd7b8b7d9e9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb  1 09:52:19 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev cd02d19a-bf29-4c1f-aab0-1f16f44d0f44 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb  1 09:52:20 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev f0be4e48-5081-43b2-a261-e596203beb2b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v94: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb  1 09:52:21 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 9d856795-73d8-4b3a-a173-83651471199a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  1 09:52:21 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=12.305476189s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active pruub 92.813713074s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb  1 09:52:21 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=12.305476189s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown pruub 92.813713074s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=9.358250618s) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 83.583534241s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=10.378397942s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active pruub 88.302650452s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=10.378397942s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown pruub 88.302650452s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=9.358250618s) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown pruub 83.583534241s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb  1 09:52:22 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev fd83c393-8d35-4899-98de-8e27e64bea40 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=40/43 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=42/43 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=42/43 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb  1 09:52:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:23 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb  1 09:52:23 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb  1 09:52:23 np0005604375 ceph-mgr[75469]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Feb  1 09:52:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v97: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb  1 09:52:23 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 44 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=44 pruub=12.310206413s) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 31'38 mlcod 31'38 active pruub 94.830680847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:23 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 5770ec13-3dda-4253-ab1e-ee301548257c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:23 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 44 pg[6.0( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=44 pruub=12.310206413s) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 31'38 mlcod 0'0 unknown pruub 94.830680847s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=11.096881866s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active pruub 86.617851257s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=11.096881866s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown pruub 86.617851257s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb  1 09:52:24 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev ee10f24e-a116-4aee-ae4a-5595d10d2b8e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 31'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=44/45 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:25 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Feb  1 09:52:25 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v100: 150 pgs: 46 unknown, 104 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb  1 09:52:25 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.298465729s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active pruub 92.332054138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:25 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[8.0( v 31'6 (0'0,31'6] local-lis/les=30/31 n=6 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46 pruub=11.514460564s) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 31'5 mlcod 31'5 active pruub 92.548133850s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:25 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 27d05e9a-17d4-4f6a-8d65-1d5c2a8f17c3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:25 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[8.0( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46 pruub=11.514460564s) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 31'5 mlcod 0'0 unknown pruub 92.548133850s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:25 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.298465729s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown pruub 92.332054138s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:26 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Feb  1 09:52:26 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb  1 09:52:26 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 2cbf1a6f-1387-4fc6-b78d-aef03e2d80a2 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1c( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1d( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.18( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.19( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1a( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1b( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.4( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.6( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.7( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.2( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.9( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.b( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.5( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1( v 31'6 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.3( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.a( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.8( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.d( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.c( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.13( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.12( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.11( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.10( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.17( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.16( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.15( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.14( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.19( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.7( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=46/47 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.0( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 31'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.3( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.8( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.5( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.13( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.17( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.16( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:26 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb  1 09:52:27 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Feb  1 09:52:27 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v103: 212 pgs: 62 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 48 pg[9.0( v 38'483 (0'0,38'483] local-lis/les=32/33 n=210 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=11.519697189s) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 38'482 mlcod 38'482 active pruub 94.567817688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev 0c710704-685a-423d-80a4-a5bae645d96a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 5f865ac9-5821-461d-bf71-3fd7b8b7d9e9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 5f865ac9-5821-461d-bf71-3fd7b8b7d9e9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev cd02d19a-bf29-4c1f-aab0-1f16f44d0f44 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event cd02d19a-bf29-4c1f-aab0-1f16f44d0f44 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev f0be4e48-5081-43b2-a261-e596203beb2b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event f0be4e48-5081-43b2-a261-e596203beb2b (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 9d856795-73d8-4b3a-a173-83651471199a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 9d856795-73d8-4b3a-a173-83651471199a (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev fd83c393-8d35-4899-98de-8e27e64bea40 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event fd83c393-8d35-4899-98de-8e27e64bea40 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 5770ec13-3dda-4253-ab1e-ee301548257c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 5770ec13-3dda-4253-ab1e-ee301548257c (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev ee10f24e-a116-4aee-ae4a-5595d10d2b8e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event ee10f24e-a116-4aee-ae4a-5595d10d2b8e (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 27d05e9a-17d4-4f6a-8d65-1d5c2a8f17c3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 27d05e9a-17d4-4f6a-8d65-1d5c2a8f17c3 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 2cbf1a6f-1387-4fc6-b78d-aef03e2d80a2 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 2cbf1a6f-1387-4fc6-b78d-aef03e2d80a2 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev 0c710704-685a-423d-80a4-a5bae645d96a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  1 09:52:27 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 0c710704-685a-423d-80a4-a5bae645d96a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 48 pg[9.0( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=11.519697189s) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 38'482 mlcod 0'0 unknown pruub 94.567817688s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65600 space 0x55a03c028240 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64300 space 0x55a03c3ceb40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc5a700 space 0x55a03d1aae40 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15900 space 0x55a03d1fe540 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15780 space 0x55a03cd8ab40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc78180 space 0x55a03c03c840 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc78080 space 0x55a03c49ae40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21280 space 0x55a03c50c540 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2580 space 0x55a03c4ee840 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20580 space 0x55a03c515a40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15880 space 0x55a03c568840 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2100 space 0x55a03c49a540 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc5b500 space 0x55a03c50d440 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21b80 space 0x55a03c2f3740 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27880 space 0x55a03c558540 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21580 space 0x55a03c569140 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65000 space 0x55a03c001740 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26d00 space 0x55a03c50d740 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21780 space 0x55a03c569a40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc99080 space 0x55a03c49b740 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27d80 space 0x55a03c510e40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27b00 space 0x55a03c4cdd40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cce0d00 space 0x55a03c463a40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21d00 space 0x55a03c4cd440 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15380 space 0x55a03c463740 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2880 space 0x55a03c4cc240 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26380 space 0x55a03c000b40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc66f80 space 0x55a03c515440 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64900 space 0x55a03c416b40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26b80 space 0x55a03c58bd40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26a00 space 0x55a03c559140 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21100 space 0x55a03c50ce40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67900 space 0x55a03c32a240 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26300 space 0x55a03c416240 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20e80 space 0x55a03c302e40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65580 space 0x55a03c511d40 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20c80 space 0x55a03c4efa40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21300 space 0x55a03c3cf440 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67700 space 0x55a03c02dd40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67c00 space 0x55a03c49bd40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20780 space 0x55a03c4ef140 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2900 space 0x55a03c4ccb40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64780 space 0x55a03c00d140 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cce0c00 space 0x55a03c463140 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15400 space 0x55a03d1dbd40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccf0c80 space 0x55a03c02c240 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65880 space 0x55a03c50cb40 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc14980 space 0x55a03c462840 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64b00 space 0x55a03c32ba40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26500 space 0x55a03c58a240 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67980 space 0x55a03c02cb40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd3b00 space 0x55a03c510540 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64e80 space 0x55a03c514b40 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccf0b80 space 0x55a03c303740 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64500 space 0x55a03c02d440 0x0~9a clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26780 space 0x55a03c58ab40 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccf0d00 space 0x55a03c302540 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20c00 space 0x55a03c2f2840 0x0~98 clean)
Feb  1 09:52:27 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27f80 space 0x55a03c511740 0x0~6e clean)
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Feb  1 09:52:28 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Feb  1 09:52:28 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Feb  1 09:52:28 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 16 completed events
Feb  1 09:52:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 09:52:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb  1 09:52:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb  1 09:52:28 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.15( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.14( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.17( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.16( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.11( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.10( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.13( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.12( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.d( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.c( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.9( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.b( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.2( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.e( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.a( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.8( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.6( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.3( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.7( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.5( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1a( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.4( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1b( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.18( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.19( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1e( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1c( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1d( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.14( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.10( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.0( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 38'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.12( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.2( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.e( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.a( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.5( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1a( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.18( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1e( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.4( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:29 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Feb  1 09:52:29 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 48 pg[10.0( v 38'18 (0'0,38'18] local-lis/les=34/35 n=9 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=12.279128075s) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 38'17 mlcod 38'17 active pruub 92.903282166s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.0( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=12.279128075s) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 38'17 mlcod 0'0 unknown pruub 92.903282166s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.3( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.4( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.5( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.6( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.2( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.7( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.8( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.9( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.a( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.c( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.b( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.d( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.e( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.f( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.10( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.11( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.12( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.13( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.14( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.15( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.16( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.17( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.18( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.19( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1a( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1b( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1c( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1d( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1e( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1f( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:29 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Feb  1 09:52:29 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Feb  1 09:52:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v106: 274 pgs: 124 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb  1 09:52:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb  1 09:52:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb  1 09:52:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  1 09:52:29 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.12( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1d( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.18( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:29 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.5( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1c( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.9( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.0( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 38'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.3( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.c( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.d( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.14( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.15( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb  1 09:52:30 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb  1 09:52:30 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=13.110879898s) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active pruub 98.637939453s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:30 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=13.110879898s) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown pruub 98.637939453s@ mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb  1 09:52:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  1 09:52:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb  1 09:52:31 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.16( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.13( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.c( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.a( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.5( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.7( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1d( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.16( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.13( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=50/51 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.5( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.7( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:31 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.b scrub starts
Feb  1 09:52:31 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.b scrub ok
Feb  1 09:52:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:32 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Feb  1 09:52:32 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Feb  1 09:52:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:34 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb  1 09:52:34 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb  1 09:52:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Feb  1 09:52:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:52:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb  1 09:52:36 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.948302269s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422180176s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945576668s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.419540405s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.948179245s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422180176s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870571136s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344612122s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945492744s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.419540405s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870471001s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344543457s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870528221s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344612122s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870384216s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344543457s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.947529793s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.421836853s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.947505951s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.421836853s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856965065s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331375122s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856930733s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331375122s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856827736s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331352234s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856798172s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331352234s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856406212s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331367493s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869767189s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344749451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856370926s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331367493s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869735718s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344749451s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856076241s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331306458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856043816s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331306458s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946708679s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422088623s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869020462s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344444275s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.855978012s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331413269s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868986130s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344444275s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.855928421s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331413269s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869002342s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344566345s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868979454s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344566345s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.855505943s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331291199s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868963242s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344741821s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946277618s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422088623s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868942261s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344741821s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946249962s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422088623s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868713379s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344757080s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946063042s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422157288s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868686676s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344757080s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946038246s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422157288s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854987144s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331245422s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854964256s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331245422s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854922295s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331291199s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868770599s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345214844s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868749619s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345214844s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945592880s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422241211s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854549408s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331207275s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945425034s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422088623s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945536613s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422241211s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854493141s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331207275s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868156433s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345024109s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945334435s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422195435s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868131638s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345024109s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945268631s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422195435s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.853488922s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.330566406s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.853464127s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.330566406s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945116997s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422271729s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945092201s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422271729s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.853004456s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.330314636s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.852982521s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.330314636s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945018768s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422370911s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867769241s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345153809s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867607117s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345199585s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944686890s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422286987s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851965904s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329574585s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867571831s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345199585s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851943016s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329574585s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944660187s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422286987s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944940567s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422370911s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851665497s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329559326s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851644516s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329559326s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867300034s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345275879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944346428s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422409058s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.16( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867216110s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345275879s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851188660s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329292297s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944303513s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422409058s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851168633s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329292297s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944171906s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422485352s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870786667s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349098206s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850994110s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329330444s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944148064s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422485352s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870760918s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349098206s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850971222s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329330444s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870546341s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349105835s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850893021s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329460144s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.18( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866640091s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345245361s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870521545s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349105835s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850867271s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329460144s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866616249s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345245361s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943723679s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422523499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850354195s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329200745s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943688393s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422523499s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850330353s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329200745s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.17( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866353035s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345275879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943520546s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422538757s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.19( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866267204s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345275879s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850158691s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329193115s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850138664s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329193115s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943484306s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422538757s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866081238s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345283508s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943339348s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422576904s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849864960s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329116821s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866742134s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345153809s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.15( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866059303s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345283508s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.11( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943315506s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422576904s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849838257s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329116821s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849318504s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.328773499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943099976s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422584534s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849292755s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.328773499s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943069458s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422584534s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.844075203s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.323715210s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.1a( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942911148s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422615051s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.844048500s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.323715210s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843911171s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.323646545s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942868233s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422615051s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843870163s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.323646545s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942697525s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422653198s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843700409s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.323715210s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942648888s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422653198s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869227409s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349296570s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843660355s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.323715210s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869144440s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349296570s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868545532s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349266052s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850687027s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331428528s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868513107s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349266052s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942833900s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.423614502s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848082542s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527412415s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.863107681s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.542610168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848316193s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527832031s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847899437s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527442932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.863079071s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.542610168s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848278999s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527832031s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847791672s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527442932s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.859298706s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539077759s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.859272957s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539077759s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847864151s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527755737s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848148346s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.528121948s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847785950s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527755737s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848121643s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.528121948s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.862565994s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.542617798s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.862540245s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.542617798s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847656250s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527854919s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847715378s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527954102s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847633362s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527854919s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858572006s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.538825989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847688675s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527954102s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858549118s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.538825989s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.6( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942800522s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.423614502s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868285179s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349220276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868234634s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349220276s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942672729s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422653198s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847630501s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527999878s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847601891s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527999878s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847013474s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527412415s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847607613s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.528076172s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847585678s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.528076172s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858558655s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539123535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858534813s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539123535s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858412743s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539138794s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847568512s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.528335571s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858382225s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539138794s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847545624s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.528335571s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849721909s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530708313s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849698067s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530708313s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850545883s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331428528s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.941585541s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422653198s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.d( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.941538811s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422698975s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.941518784s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422698975s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.7( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849439621s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530769348s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858036041s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539375305s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858005524s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539375305s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849103928s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530548096s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849063873s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530548096s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.11( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849410057s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530769348s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.857892990s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539421082s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.857870102s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539421082s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848866463s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530563354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849171638s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530891418s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.10( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.12( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849019051s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530761719s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849148750s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530891418s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848813057s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530563354s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848981857s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530761719s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848928452s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530754089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848891258s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530754089s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848884583s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530899048s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848855972s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530906677s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848856926s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530899048s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848832130s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530906677s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848437309s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530899048s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848410606s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530899048s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.4( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.b( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.871395111s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056854248s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.876036644s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061561584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.871347427s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056854248s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.876002312s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061561584s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842407227s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.028205872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842374802s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.028205872s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870972633s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056861877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870937347s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056861877s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870798111s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056823730s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870768547s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056823730s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.8( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.951157570s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137321472s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.951132774s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137321472s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842537880s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.028846741s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842505455s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.028846741s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870776176s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.057128906s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870702744s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.057128906s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.8( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837202072s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.023963928s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837156296s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.023963928s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.3( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.9( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.2( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.949327469s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137107849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.949280739s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137107849s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872479439s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.060394287s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872416496s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.060394287s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868700981s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056846619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868659973s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056846619s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840605736s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.028900146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868402481s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056755066s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868370056s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056755066s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840518951s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.028900146s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868306160s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056808472s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868268967s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056808472s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872969627s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061561584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872942924s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061561584s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947257996s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.136054993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947229385s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.136054993s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867700577s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056541443s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.948195457s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947972298s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137062073s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947851181s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867271423s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056533813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872364998s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061576843s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947780609s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137062073s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867238045s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056533813s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872241974s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061576843s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867666245s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056541443s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947936058s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137374878s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.841079712s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030624390s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947895050s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137374878s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.841053963s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030624390s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866762161s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056442261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866786957s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056556702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866764069s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056556702s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.1( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947862625s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137649536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866366386s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056198120s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866345406s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056198120s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947803497s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137649536s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.871632576s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061660767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.871613503s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061660767s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947594643s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137657166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840612411s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030738831s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947533607s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137657166s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.1c( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866209030s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056358337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840570450s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030738831s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866174698s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056358337s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840394974s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030685425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866731644s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056442261s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840289116s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030685425s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.4( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865409851s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056442261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946555138s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137657166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865369797s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056442261s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865055084s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056182861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946516037s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137657166s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865015030s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056182861s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.860674858s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.052162170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.870164871s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061729431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.860606194s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.052162170s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946076393s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137687683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.870128632s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061729431s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946034431s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137687683s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839039803s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030761719s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.838968277s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030761719s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839347839s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031219482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839310646s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031219482s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.873323441s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065277100s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.873285294s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065277100s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.945528984s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137664795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.838544846s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030708313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859667778s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051811218s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.838519096s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030708313s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859598160s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051811218s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839510918s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031959534s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.869253159s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061729431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839484215s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031959534s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946880341s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139411926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.869215965s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061729431s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946842194s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139411926s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859191895s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051849365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.945391655s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137664795s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859155655s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051849365s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859388351s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.052131653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859363556s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.052131653s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837987900s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030799866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.1d( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837941170s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030799866s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.1f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.5( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.14( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944193840s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137748718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944156647s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137748718s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857801437s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051589966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857769012s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051589966s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.867748260s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061843872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.867710114s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061843872s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857240677s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051589966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857217789s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051589966s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944826126s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139320374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856967926s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051498413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856949806s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051498413s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944797516s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139320374s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856684685s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051376343s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856664658s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051376343s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856701851s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051490784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856669426s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051490784s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.6( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.1f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.855905533s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051216125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.855861664s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051216125s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.943911552s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139404297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.943844795s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139404297s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.1e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.9( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.a( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.10( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.1d( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.2( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.1b( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.12( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.14( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.11( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.6( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.18( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.826411247s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030937195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.826364517s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030937195s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.846562386s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051414490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.846531868s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051414490s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.859771729s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.064926147s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.825776100s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030952454s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.934087753s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139343262s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.825731277s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030952454s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.934059143s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139343262s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.843759537s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049308777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.843726158s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049308777s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.7( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.845353127s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051208496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.845333099s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051208496s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.858901978s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065002441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.933281898s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139411926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.858864784s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065002441s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.933259010s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139411926s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.842927933s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049301147s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.842905998s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049301147s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.824503899s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031005859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.844851494s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051193237s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.824473381s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031005859s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.844413757s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051193237s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.858564377s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.064926147s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.3( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.1( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.a( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.1( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.9( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.834575653s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.049278259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.834532738s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.049278259s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816321373s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031219482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816288948s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031219482s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.849802017s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 38'483 active pruub 100.064933777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.849752426s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 38'483 unknown NOTIFY pruub 100.064933777s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.9( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.924089432s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139511108s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.c( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.924052238s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139511108s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.6( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833435059s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.049072266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833395004s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.049072266s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923677444s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139442444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923585892s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139442444s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.815390587s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031356812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833096504s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049079895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.815352440s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031356812s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833050728s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049079895s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832967758s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.049087524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848978996s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065254211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923089981s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139411926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848951340s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065254211s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923061371s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139411926s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923016548s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139610291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816305161s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.032897949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.922987938s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139610291s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832287788s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048957825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816262245s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.032897949s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832257271s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048957825s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848461151s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065414429s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848436356s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065414429s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.922493935s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139488220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.922467232s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139488220s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832938194s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.049087524s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.5( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.831164360s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048934937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.831125259s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048934937s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.847593307s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065498352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830725670s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.048667908s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.847569466s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065498352s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813379288s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031364441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830689430s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.048667908s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813345909s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031364441s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921483994s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139564514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921455383s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139564514s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813126564s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031387329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813084602s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031387329s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921098709s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139610291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830430984s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049003601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921034813s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139610291s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830393791s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049003601s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.846801758s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065437317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.846771240s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065437317s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.829882622s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048683167s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.829850197s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048683167s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.829357147s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048583984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.827646255s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048583984s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.809946060s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031440735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:36 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.809902191s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031440735s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.8( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.12( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.18( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.2( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.1a( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.4( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.1b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.11( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.15( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.16( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.17( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.1d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Feb  1 09:52:36 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=52/53 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=52/53 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.14( v 50'19 lc 35'7 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.12( v 50'19 lc 38'17 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.5( v 32'39 lc 31'11 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.7( v 32'39 lc 31'21 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.e( v 50'19 lc 35'4 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.d( v 50'19 lc 35'5 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.15( v 50'19 lc 35'3 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.9( v 50'19 lc 35'8 (0'0,50'19] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.6( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb  1 09:52:37 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb  1 09:52:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb  1 09:52:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  1 09:52:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb  1 09:52:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  1 09:52:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  1 09:52:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb  1 09:52:38 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb  1 09:52:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  1 09:52:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  1 09:52:38 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb  1 09:52:38 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb  1 09:52:38 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event 23ca1801-a40f-405c-bbf0-4b566eca4f29 (Global Recovery Event) in 15 seconds
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.395108223s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.536727905s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.395028114s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.536727905s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396500587s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.539016724s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396461487s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539016724s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396233559s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.539131165s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396144867s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539131165s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396371841s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.539390564s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396327019s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539390564s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.6( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.2( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.e( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=49'484 lcod 38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.421211243s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648559570s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.420630455s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648559570s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.420741081s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.649414062s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.420618057s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649414062s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.419396400s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648750305s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.419229507s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648750305s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.418274879s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648757935s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.418058395s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648757935s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=54/55 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.417829514s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648933411s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.417583466s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648933411s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:39 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=54/55 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:39 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.e( v 32'39 lc 31'19 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:39 np0005604375 python3[98240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:52:39 np0005604375 podman[98241]: 2026-02-01 14:52:39.360744248 +0000 UTC m=+0.054764255 container create ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:52:39 np0005604375 systemd[76558]: Starting Mark boot as successful...
Feb  1 09:52:39 np0005604375 systemd[76558]: Finished Mark boot as successful.
Feb  1 09:52:39 np0005604375 systemd[1]: Started libpod-conmon-ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664.scope.
Feb  1 09:52:39 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:52:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d4173c4c08e4e6bd12c15158569a4af7f1e36f28b009a5fbd8ea5ab28426d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:52:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d4173c4c08e4e6bd12c15158569a4af7f1e36f28b009a5fbd8ea5ab28426d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:52:39 np0005604375 podman[98241]: 2026-02-01 14:52:39.341546327 +0000 UTC m=+0.035566314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:52:39 np0005604375 podman[98241]: 2026-02-01 14:52:39.437624985 +0000 UTC m=+0.131644992 container init ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 09:52:39 np0005604375 podman[98241]: 2026-02-01 14:52:39.442257056 +0000 UTC m=+0.136277043 container start ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  1 09:52:39 np0005604375 podman[98241]: 2026-02-01 14:52:39.445655112 +0000 UTC m=+0.139675109 container attach ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:52:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb  1 09:52:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411252975s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649475098s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411259651s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649597168s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411093712s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649475098s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411152840s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649597168s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.412872314s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 38'483 active pruub 109.651443481s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.412771225s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 38'483 unknown NOTIFY pruub 109.651443481s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.410440445s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649291992s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.410350800s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649291992s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409917831s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649002075s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409841537s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649002075s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409867287s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649505615s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409816742s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649505615s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=49'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=49'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.994630814s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.235671997s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.994583130s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.235671997s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997785568s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.240180969s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997887611s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.240409851s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997598648s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.240180969s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997559547s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.240409851s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408428192s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.651412964s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=13.002901077s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.245918274s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408333778s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.651412964s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=13.002845764s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.245918274s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408208847s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=49'484 lcod 55'485 active pruub 109.651405334s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408074379s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=49'484 lcod 55'485 unknown NOTIFY pruub 109.651405334s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.405170441s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.648628235s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.405096054s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648628235s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.407301903s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.651481628s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.407196999s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.651481628s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.404499054s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.648864746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.404430389s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648864746s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Feb  1 09:52:40 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Feb  1 09:52:40 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Feb  1 09:52:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb  1 09:52:41 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  1 09:52:41 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  1 09:52:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb  1 09:52:41 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=55'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=55'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.7( v 32'39 lc 31'21 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=56/57 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:41 np0005604375 vigorous_lumiere[98258]: could not fetch user info: no user info saved
Feb  1 09:52:41 np0005604375 systemd[1]: libpod-ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664.scope: Deactivated successfully.
Feb  1 09:52:41 np0005604375 podman[98241]: 2026-02-01 14:52:41.222360575 +0000 UTC m=+1.916380582 container died ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:52:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d7d4173c4c08e4e6bd12c15158569a4af7f1e36f28b009a5fbd8ea5ab28426d1-merged.mount: Deactivated successfully.
Feb  1 09:52:41 np0005604375 podman[98241]: 2026-02-01 14:52:41.263116334 +0000 UTC m=+1.957136351 container remove ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:52:41 np0005604375 systemd[1]: libpod-conmon-ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664.scope: Deactivated successfully.
Feb  1 09:52:41 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb  1 09:52:41 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb  1 09:52:41 np0005604375 python3[98381]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:52:41 np0005604375 podman[98382]: 2026-02-01 14:52:41.595344909 +0000 UTC m=+0.037792676 container create 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 09:52:41 np0005604375 systemd[1]: Started libpod-conmon-29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6.scope.
Feb  1 09:52:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:52:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222832f23944c2d629283b86c55eeebf42691ba138670fd4d903b14c0f5dabd4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:52:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222832f23944c2d629283b86c55eeebf42691ba138670fd4d903b14c0f5dabd4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:52:41 np0005604375 podman[98382]: 2026-02-01 14:52:41.67094231 +0000 UTC m=+0.113390097 container init 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:52:41 np0005604375 podman[98382]: 2026-02-01 14:52:41.581845879 +0000 UTC m=+0.024293646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  1 09:52:41 np0005604375 podman[98382]: 2026-02-01 14:52:41.676321602 +0000 UTC m=+0.118769409 container start 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:52:41 np0005604375 podman[98382]: 2026-02-01 14:52:41.680090578 +0000 UTC m=+0.122538345 container attach 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 09:52:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 op/s; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Feb  1 09:52:41 np0005604375 busy_allen[98397]: {
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "user_id": "openstack",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "display_name": "openstack",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "email": "",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "suspended": 0,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "max_buckets": 1000,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "subusers": [],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "keys": [
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        {
Feb  1 09:52:41 np0005604375 busy_allen[98397]:            "user": "openstack",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:            "access_key": "HJSLQLIKXTYXGHFHD0W0",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:            "secret_key": "QD2Ghu8DgZL7G7Ajq8urcmkK9esvUbwgihgz5x9I",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:            "active": true,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:            "create_date": "2026-02-01T14:52:41.859752Z"
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        }
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    ],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "swift_keys": [],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "caps": [],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "op_mask": "read, write, delete",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "default_placement": "",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "default_storage_class": "",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "placement_tags": [],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "bucket_quota": {
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "enabled": false,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "check_on_raw": false,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "max_size": -1,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "max_size_kb": 0,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "max_objects": -1
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    },
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "user_quota": {
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "enabled": false,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "check_on_raw": false,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "max_size": -1,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "max_size_kb": 0,
Feb  1 09:52:41 np0005604375 busy_allen[98397]:        "max_objects": -1
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    },
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "temp_url_keys": [],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "type": "rgw",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "mfa_ids": [],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "account_id": "",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "path": "/",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "create_date": "2026-02-01T14:52:41.859284Z",
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "tags": [],
Feb  1 09:52:41 np0005604375 busy_allen[98397]:    "group_ids": []
Feb  1 09:52:41 np0005604375 busy_allen[98397]: }
Feb  1 09:52:41 np0005604375 busy_allen[98397]: 
Feb  1 09:52:41 np0005604375 systemd[1]: libpod-29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6.scope: Deactivated successfully.
Feb  1 09:52:41 np0005604375 podman[98483]: 2026-02-01 14:52:41.952422865 +0000 UTC m=+0.037532109 container died 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb  1 09:52:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-222832f23944c2d629283b86c55eeebf42691ba138670fd4d903b14c0f5dabd4-merged.mount: Deactivated successfully.
Feb  1 09:52:41 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb  1 09:52:41 np0005604375 podman[98483]: 2026-02-01 14:52:41.996748275 +0000 UTC m=+0.081857449 container remove 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 09:52:42 np0005604375 systemd[1]: libpod-conmon-29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6.scope: Deactivated successfully.
Feb  1 09:52:42 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb  1 09:52:43 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb  1 09:52:43 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb  1 09:52:43 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 17 completed events
Feb  1 09:52:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 09:52:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:43 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Feb  1 09:52:43 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Feb  1 09:52:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 8 op/s; 1.1 KiB/s, 1 keys/s, 21 objects/s recovering
Feb  1 09:52:44 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb  1 09:52:44 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb  1 09:52:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:52:44 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Feb  1 09:52:44 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Feb  1 09:52:45 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb  1 09:52:45 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb  1 09:52:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Feb  1 09:52:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Feb  1 09:52:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 309 B/s wr, 27 op/s; 942 B/s, 1 keys/s, 18 objects/s recovering
Feb  1 09:52:47 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb  1 09:52:47 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb  1 09:52:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s; 861 B/s, 2 keys/s, 16 objects/s recovering
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  1 09:52:47 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.117376328s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 active pruub 115.539176941s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:47 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.117288589s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 115.539176941s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:47 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.116833687s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 active pruub 115.539482117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:47 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.116793633s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 115.539482117s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:47 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb  1 09:52:47 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:47 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:47 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Feb  1 09:52:47 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Feb  1 09:52:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:52:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:52:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:52:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:52:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:52:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:52:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb  1 09:52:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb  1 09:52:48 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb  1 09:52:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  1 09:52:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  1 09:52:48 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 59 pg[6.4( v 32'39 lc 31'15 (0'0,32'39] local-lis/les=58/59 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:48 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 59 pg[6.c( v 32'39 lc 31'17 (0'0,32'39] local-lis/les=58/59 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 255 B/s wr, 29 op/s; 80 B/s, 1 keys/s, 0 objects/s recovering
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb  1 09:52:49 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297952652s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 active pruub 116.240661621s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:49 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297438622s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 active pruub 116.240341187s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:49 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297299385s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 unknown NOTIFY pruub 116.240341187s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:49 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297692299s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 unknown NOTIFY pruub 116.240661621s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  1 09:52:49 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  1 09:52:49 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb  1 09:52:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb  1 09:52:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb  1 09:52:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  1 09:52:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  1 09:52:50 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 61 pg[6.5( v 32'39 lc 31'11 (0'0,32'39] local-lis/les=60/61 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:50 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 61 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=60/61 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:51 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Feb  1 09:52:51 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Feb  1 09:52:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb  1 09:52:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb  1 09:52:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 445 B/s, 2 objects/s recovering
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  1 09:52:51 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Feb  1 09:52:51 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  1 09:52:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  1 09:52:51 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Feb  1 09:52:52 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Feb  1 09:52:52 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb  1 09:52:52 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb  1 09:52:52 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  1 09:52:52 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.529041290s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 active pruub 124.060974121s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.528965950s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 124.060974121s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.529239655s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=57'488 lcod 57'488 active pruub 124.062179565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.529177666s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=57'488 lcod 57'488 unknown NOTIFY pruub 124.062179565s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531764984s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 active pruub 124.065437317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531714439s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 124.065437317s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531764030s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=56'484 lcod 56'484 active pruub 124.065750122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531688690s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=56'484 lcod 56'484 unknown NOTIFY pruub 124.065750122s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 361 B/s, 1 objects/s recovering
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb  1 09:52:53 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=57'488 lcod 57'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=57'488 lcod 57'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=56'484 lcod 56'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=56'484 lcod 56'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:53 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928535461s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'486 lcod 57'486 active pruub 123.753288269s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928648949s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 active pruub 123.753433228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928591728s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 unknown NOTIFY pruub 123.753433228s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928303719s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 123.753288269s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928470612s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 active pruub 123.753669739s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928447723s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 unknown NOTIFY pruub 123.753669739s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928160667s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=38'483 active pruub 123.753845215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928115845s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=38'483 unknown NOTIFY pruub 123.753845215s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 systemd-logind[786]: New session 33 of user zuul.
Feb  1 09:52:54 np0005604375 systemd[1]: Started Session 33 of User zuul.
Feb  1 09:52:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb  1 09:52:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  1 09:52:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  1 09:52:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb  1 09:52:54 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:54 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=63/64 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=57'489 lcod 57'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:54 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=63/64 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:54 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=63/64 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:54 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=63/64 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=57'485 lcod 56'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:52:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb  1 09:52:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb  1 09:52:55 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=57'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497686386s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=57'485 lcod 56'484 active pruub 126.046806335s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497288704s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 active pruub 126.046455383s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497209549s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 126.046455383s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497092247s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=57'489 lcod 57'488 active pruub 126.046386719s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.496965408s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=57'489 lcod 57'488 unknown NOTIFY pruub 126.046386719s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.496891022s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 active pruub 126.046516418s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.496793747s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 126.046516418s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.495891571s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=57'485 lcod 56'484 unknown NOTIFY pruub 126.046806335s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:55 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=64/65 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:55 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=64/65 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:55 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=64/65 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=57'485 lcod 57'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:55 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=64/65 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=57'485 lcod 57'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 4 peering, 301 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Feb  1 09:52:55 np0005604375 python3.9[98651]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Feb  1 09:52:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb  1 09:52:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb  1 09:52:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb  1 09:52:56 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995784760s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=57'487 lcod 57'486 active pruub 130.059646606s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995539665s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 130.059646606s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995609283s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 active pruub 130.059814453s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995502472s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 unknown NOTIFY pruub 130.059814453s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995087624s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 active pruub 130.059875488s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.994912148s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 unknown NOTIFY pruub 130.059875488s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.994610786s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=38'483 active pruub 130.059753418s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.994564056s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=38'483 unknown NOTIFY pruub 130.059753418s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=65/66 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=65/66 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:56 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Feb  1 09:52:56 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Feb  1 09:52:56 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Feb  1 09:52:56 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Feb  1 09:52:57 np0005604375 python3.9[98869]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:52:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb  1 09:52:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb  1 09:52:57 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb  1 09:52:57 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=66/67 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:57 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=66/67 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:57 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=66/67 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:57 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:52:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 519 B/s, 12 objects/s recovering
Feb  1 09:52:58 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Feb  1 09:52:58 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Feb  1 09:52:58 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Feb  1 09:52:58 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Feb  1 09:52:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 413 B/s, 9 objects/s recovering
Feb  1 09:53:00 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Feb  1 09:53:00 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:53:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:53:01 np0005604375 podman[99035]: 2026-02-01 14:53:01.161779873 +0000 UTC m=+0.050160849 container create c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:53:01 np0005604375 systemd[1]: Started libpod-conmon-c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110.scope.
Feb  1 09:53:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:53:01 np0005604375 podman[99035]: 2026-02-01 14:53:01.140780168 +0000 UTC m=+0.029161234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:53:01 np0005604375 podman[99035]: 2026-02-01 14:53:01.240412337 +0000 UTC m=+0.128793333 container init c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  1 09:53:01 np0005604375 podman[99035]: 2026-02-01 14:53:01.247509451 +0000 UTC m=+0.135890457 container start c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:53:01 np0005604375 podman[99035]: 2026-02-01 14:53:01.251171085 +0000 UTC m=+0.139552061 container attach c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:53:01 np0005604375 vibrant_booth[99054]: 167 167
Feb  1 09:53:01 np0005604375 systemd[1]: libpod-c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110.scope: Deactivated successfully.
Feb  1 09:53:01 np0005604375 podman[99035]: 2026-02-01 14:53:01.254026001 +0000 UTC m=+0.142406977 container died c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:53:01 np0005604375 systemd[1]: var-lib-containers-storage-overlay-47a2575bf4a52dcbe3db78aaf9b1dd258ffa549b31e9af8fb02cfe859d503eaa-merged.mount: Deactivated successfully.
Feb  1 09:53:01 np0005604375 podman[99035]: 2026-02-01 14:53:01.303616116 +0000 UTC m=+0.191997092 container remove c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:01 np0005604375 systemd[1]: libpod-conmon-c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110.scope: Deactivated successfully.
Feb  1 09:53:01 np0005604375 podman[99079]: 2026-02-01 14:53:01.441337203 +0000 UTC m=+0.047729442 container create f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:53:01 np0005604375 systemd[1]: Started libpod-conmon-f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87.scope.
Feb  1 09:53:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:53:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:01 np0005604375 podman[99079]: 2026-02-01 14:53:01.426684155 +0000 UTC m=+0.033076414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:53:01 np0005604375 podman[99079]: 2026-02-01 14:53:01.542199011 +0000 UTC m=+0.148591300 container init f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:53:01 np0005604375 podman[99079]: 2026-02-01 14:53:01.558922237 +0000 UTC m=+0.165314476 container start f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:53:01 np0005604375 podman[99079]: 2026-02-01 14:53:01.562220243 +0000 UTC m=+0.168612582 container attach f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:53:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 316 B/s, 7 objects/s recovering
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb  1 09:53:01 np0005604375 magical_sutherland[99096]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:53:01 np0005604375 magical_sutherland[99096]: --> All data devices are unavailable
Feb  1 09:53:01 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.963678360s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=38'483 lcod 0'0 active pruub 132.062423706s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:01 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.963602066s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 132.062423706s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:01 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.966928482s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=57'486 lcod 57'486 active pruub 132.065948486s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:01 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.966861725s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 132.065948486s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:01 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb  1 09:53:01 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:01 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:02 np0005604375 podman[99079]: 2026-02-01 14:53:02.014464318 +0000 UTC m=+0.620856557 container died f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb  1 09:53:02 np0005604375 systemd[1]: libpod-f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87.scope: Deactivated successfully.
Feb  1 09:53:02 np0005604375 systemd[1]: var-lib-containers-storage-overlay-735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c-merged.mount: Deactivated successfully.
Feb  1 09:53:02 np0005604375 podman[99079]: 2026-02-01 14:53:02.052380293 +0000 UTC m=+0.658772532 container remove f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:53:02 np0005604375 systemd[1]: libpod-conmon-f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87.scope: Deactivated successfully.
Feb  1 09:53:02 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 68 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=10.513087273s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=32'39 lcod 0'0 active pruub 131.539382935s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:02 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 68 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=10.513002396s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 131.539382935s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:02 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68) [2] r=0 lpr=68 pi=[44,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:02 np0005604375 podman[99189]: 2026-02-01 14:53:02.532138553 +0000 UTC m=+0.067314094 container create 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:02 np0005604375 systemd[1]: Started libpod-conmon-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope.
Feb  1 09:53:02 np0005604375 podman[99189]: 2026-02-01 14:53:02.499880569 +0000 UTC m=+0.035056200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:53:02 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:53:02 np0005604375 podman[99189]: 2026-02-01 14:53:02.62257158 +0000 UTC m=+0.157747141 container init 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:02 np0005604375 podman[99189]: 2026-02-01 14:53:02.629646553 +0000 UTC m=+0.164822124 container start 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:53:02 np0005604375 podman[99189]: 2026-02-01 14:53:02.632576331 +0000 UTC m=+0.167751902 container attach 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:02 np0005604375 stupefied_saha[99206]: 167 167
Feb  1 09:53:02 np0005604375 systemd[1]: libpod-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope: Deactivated successfully.
Feb  1 09:53:02 np0005604375 conmon[99206]: conmon 74027cc61e818bc9f17b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope/container/memory.events
Feb  1 09:53:02 np0005604375 podman[99189]: 2026-02-01 14:53:02.636765117 +0000 UTC m=+0.171940678 container died 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 09:53:02 np0005604375 systemd[1]: var-lib-containers-storage-overlay-751ea7f5cb9a90b1e009f3f8d973e0af11a2f34e28f206c7e151f025f5aee037-merged.mount: Deactivated successfully.
Feb  1 09:53:02 np0005604375 podman[99189]: 2026-02-01 14:53:02.683172208 +0000 UTC m=+0.218347749 container remove 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:53:02 np0005604375 systemd[1]: libpod-conmon-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope: Deactivated successfully.
Feb  1 09:53:02 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb  1 09:53:02 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb  1 09:53:02 np0005604375 podman[99230]: 2026-02-01 14:53:02.824692374 +0000 UTC m=+0.040371633 container create 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 09:53:02 np0005604375 systemd[1]: Started libpod-conmon-68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9.scope.
Feb  1 09:53:02 np0005604375 podman[99230]: 2026-02-01 14:53:02.8076324 +0000 UTC m=+0.023311639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:53:02 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:53:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:02 np0005604375 podman[99230]: 2026-02-01 14:53:02.954045268 +0000 UTC m=+0.169724537 container init 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:53:02 np0005604375 podman[99230]: 2026-02-01 14:53:02.962410762 +0000 UTC m=+0.178090011 container start 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:02 np0005604375 podman[99230]: 2026-02-01 14:53:02.966266841 +0000 UTC m=+0.181946130 container attach 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:53:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb  1 09:53:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb  1 09:53:02 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb  1 09:53:02 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:02 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:02 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:02 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:02 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  1 09:53:02 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  1 09:53:02 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=68/69 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68) [2] r=0 lpr=68 pi=[44,68)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:02 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:02 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:02 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:02 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]: {
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:    "0": [
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:        {
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "devices": [
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "/dev/loop3"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            ],
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_name": "ceph_lv0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_size": "21470642176",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "name": "ceph_lv0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "tags": {
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cluster_name": "ceph",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.crush_device_class": "",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.encrypted": "0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.objectstore": "bluestore",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osd_id": "0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.type": "block",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.vdo": "0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.with_tpm": "0"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            },
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "type": "block",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "vg_name": "ceph_vg0"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:        }
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:    ],
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:    "1": [
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:        {
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "devices": [
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "/dev/loop4"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            ],
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_name": "ceph_lv1",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_size": "21470642176",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "name": "ceph_lv1",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "tags": {
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cluster_name": "ceph",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.crush_device_class": "",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.encrypted": "0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.objectstore": "bluestore",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osd_id": "1",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.type": "block",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.vdo": "0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.with_tpm": "0"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            },
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "type": "block",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "vg_name": "ceph_vg1"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:        }
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:    ],
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:    "2": [
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:        {
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "devices": [
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "/dev/loop5"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            ],
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_name": "ceph_lv2",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_size": "21470642176",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "name": "ceph_lv2",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "tags": {
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.cluster_name": "ceph",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.crush_device_class": "",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.encrypted": "0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.objectstore": "bluestore",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osd_id": "2",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.type": "block",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.vdo": "0",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:                "ceph.with_tpm": "0"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            },
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "type": "block",
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:            "vg_name": "ceph_vg2"
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:        }
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]:    ]
Feb  1 09:53:03 np0005604375 sharp_beaver[99247]: }
Feb  1 09:53:03 np0005604375 systemd[1]: libpod-68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9.scope: Deactivated successfully.
Feb  1 09:53:03 np0005604375 podman[99230]: 2026-02-01 14:53:03.270434389 +0000 UTC m=+0.486113648 container died 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 09:53:03 np0005604375 systemd[1]: var-lib-containers-storage-overlay-1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b-merged.mount: Deactivated successfully.
Feb  1 09:53:03 np0005604375 podman[99230]: 2026-02-01 14:53:03.31468577 +0000 UTC m=+0.530364999 container remove 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:03 np0005604375 systemd[1]: libpod-conmon-68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9.scope: Deactivated successfully.
Feb  1 09:53:03 np0005604375 podman[99337]: 2026-02-01 14:53:03.723810901 +0000 UTC m=+0.051644483 container create 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:53:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Feb  1 09:53:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  1 09:53:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb  1 09:53:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  1 09:53:03 np0005604375 systemd[1]: Started libpod-conmon-4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a.scope.
Feb  1 09:53:03 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Feb  1 09:53:03 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Feb  1 09:53:03 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:53:03 np0005604375 podman[99337]: 2026-02-01 14:53:03.698141128 +0000 UTC m=+0.025974780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:53:03 np0005604375 podman[99337]: 2026-02-01 14:53:03.797836799 +0000 UTC m=+0.125670411 container init 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:03 np0005604375 podman[99337]: 2026-02-01 14:53:03.803166302 +0000 UTC m=+0.130999874 container start 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:03 np0005604375 xenodochial_neumann[99353]: 167 167
Feb  1 09:53:03 np0005604375 systemd[1]: libpod-4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a.scope: Deactivated successfully.
Feb  1 09:53:03 np0005604375 podman[99337]: 2026-02-01 14:53:03.808805772 +0000 UTC m=+0.136639434 container attach 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:53:03 np0005604375 podman[99337]: 2026-02-01 14:53:03.809426606 +0000 UTC m=+0.137260198 container died 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:53:03 np0005604375 systemd[1]: var-lib-containers-storage-overlay-69967fd5be47176ac06a186c2c31a140cec5aa3afde7f01166261ce773d92215-merged.mount: Deactivated successfully.
Feb  1 09:53:03 np0005604375 podman[99337]: 2026-02-01 14:53:03.845716643 +0000 UTC m=+0.173550215 container remove 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:53:03 np0005604375 systemd[1]: libpod-conmon-4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a.scope: Deactivated successfully.
Feb  1 09:53:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb  1 09:53:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  1 09:53:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  1 09:53:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb  1 09:53:04 np0005604375 podman[99401]: 2026-02-01 14:53:04.031743026 +0000 UTC m=+0.100132522 container create 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 09:53:04 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb  1 09:53:04 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=13.064660072s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=32'39 lcod 0'0 active pruub 132.241012573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  1 09:53:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  1 09:53:04 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=13.064594269s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 132.241012573s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:04 np0005604375 podman[99401]: 2026-02-01 14:53:03.977460313 +0000 UTC m=+0.045849859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:53:04 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 70 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:04 np0005604375 systemd[1]: Started libpod-conmon-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope.
Feb  1 09:53:04 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:53:04 np0005604375 systemd[1]: session-33.scope: Deactivated successfully.
Feb  1 09:53:04 np0005604375 systemd[1]: session-33.scope: Consumed 7.757s CPU time.
Feb  1 09:53:04 np0005604375 systemd-logind[786]: Session 33 logged out. Waiting for processes to exit.
Feb  1 09:53:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:04 np0005604375 systemd-logind[786]: Removed session 33.
Feb  1 09:53:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:53:04 np0005604375 podman[99401]: 2026-02-01 14:53:04.148629073 +0000 UTC m=+0.217018619 container init 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 09:53:04 np0005604375 podman[99401]: 2026-02-01 14:53:04.158267385 +0000 UTC m=+0.226656851 container start 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:53:04 np0005604375 podman[99401]: 2026-02-01 14:53:04.161361227 +0000 UTC m=+0.229750783 container attach 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 09:53:04 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[48,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:04 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=69/70 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[48,69)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:04 np0005604375 lvm[99495]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:53:04 np0005604375 lvm[99495]: VG ceph_vg0 finished
Feb  1 09:53:04 np0005604375 lvm[99498]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:53:04 np0005604375 lvm[99498]: VG ceph_vg1 finished
Feb  1 09:53:04 np0005604375 lvm[99500]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:53:04 np0005604375 lvm[99500]: VG ceph_vg2 finished
Feb  1 09:53:04 np0005604375 exciting_diffie[99418]: {}
Feb  1 09:53:04 np0005604375 systemd[1]: libpod-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope: Deactivated successfully.
Feb  1 09:53:04 np0005604375 systemd[1]: libpod-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope: Consumed 1.169s CPU time.
Feb  1 09:53:04 np0005604375 podman[99401]: 2026-02-01 14:53:04.986408223 +0000 UTC m=+1.054797719 container died 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 09:53:05 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66-merged.mount: Deactivated successfully.
Feb  1 09:53:05 np0005604375 podman[99401]: 2026-02-01 14:53:05.034990504 +0000 UTC m=+1.103380000 container remove 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  1 09:53:05 np0005604375 systemd[1]: libpod-conmon-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope: Deactivated successfully.
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb  1 09:53:05 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:05 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:05 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:05 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:05 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=69/70 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.237437248s) [2] async=[2] r=-1 lpr=71 pi=[48,71)/1 crt=57'487 lcod 57'486 active pruub 135.434600830s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:05 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=69/70 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.237270355s) [2] r=-1 lpr=71 pi=[48,71)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 135.434600830s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:05 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.236918449s) [2] async=[2] r=-1 lpr=71 pi=[48,71)/1 crt=38'483 lcod 0'0 active pruub 135.434539795s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  1 09:53:05 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.236736298s) [2] r=-1 lpr=71 pi=[48,71)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 135.434539795s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:05 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 71 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=70/71 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:53:05 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb  1 09:53:05 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb  1 09:53:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Feb  1 09:53:05 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb  1 09:53:05 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb  1 09:53:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb  1 09:53:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb  1 09:53:06 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb  1 09:53:06 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:53:06 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:53:06 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 72 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=71/72 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:06 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 72 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=71/72 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:06 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.a scrub starts
Feb  1 09:53:06 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.a scrub ok
Feb  1 09:53:07 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Feb  1 09:53:07 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Feb  1 09:53:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 2 objects/s recovering
Feb  1 09:53:07 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Feb  1 09:53:07 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Feb  1 09:53:08 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb  1 09:53:08 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Feb  1 09:53:08 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Feb  1 09:53:08 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Feb  1 09:53:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Feb  1 09:53:09 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Feb  1 09:53:09 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Feb  1 09:53:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:10 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Feb  1 09:53:10 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Feb  1 09:53:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 1 objects/s recovering
Feb  1 09:53:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Feb  1 09:53:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  1 09:53:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb  1 09:53:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  1 09:53:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb  1 09:53:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  1 09:53:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  1 09:53:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  1 09:53:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  1 09:53:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb  1 09:53:12 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb  1 09:53:12 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 73 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=14.500942230s) [0] r=-1 lpr=73 pi=[54,73)/1 crt=32'39 lcod 0'0 active pruub 142.235549927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:12 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 73 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=14.500884056s) [0] r=-1 lpr=73 pi=[54,73)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 142.235549927s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:12 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 73 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73) [0] r=0 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb  1 09:53:13 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 74 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=73/74 n=1 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73) [0] r=0 lpr=73 pi=[54,73)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:13 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb  1 09:53:13 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb  1 09:53:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb  1 09:53:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  1 09:53:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb  1 09:53:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  1 09:53:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  1 09:53:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb  1 09:53:14 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  1 09:53:14 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  1 09:53:14 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb  1 09:53:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  1 09:53:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  1 09:53:15 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Feb  1 09:53:15 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Feb  1 09:53:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Feb  1 09:53:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  1 09:53:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb  1 09:53:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  1 09:53:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb  1 09:53:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  1 09:53:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  1 09:53:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb  1 09:53:16 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb  1 09:53:16 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 75 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=12.965231895s) [1] r=-1 lpr=75 pi=[56,75)/1 crt=32'39 active pruub 147.773178101s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:16 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 76 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=12.965162277s) [1] r=-1 lpr=75 pi=[56,75)/1 crt=32'39 unknown NOTIFY pruub 147.773178101s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:16 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  1 09:53:16 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  1 09:53:16 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75) [1] r=0 lpr=76 pi=[56,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:16 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.747318268s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=38'483 lcod 0'0 active pruub 140.062316895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:16 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.747279167s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 140.062316895s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:16 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.750363350s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=57'486 lcod 57'486 active pruub 140.066101074s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:16 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.750317574s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 140.066101074s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:16 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76) [2] r=0 lpr=76 pi=[48,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:16 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76) [2] r=0 lpr=76 pi=[48,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:16 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb  1 09:53:16 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb  1 09:53:17 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:17 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:17 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:17 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:17 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:17 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:17 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:17 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  1 09:53:17 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=75/77 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75) [1] r=0 lpr=76 pi=[56,75)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:53:17
Feb  1 09:53:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:53:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:53:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'backups']
Feb  1 09:53:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 09:53:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb  1 09:53:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb  1 09:53:18 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 78 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=77/78 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[48,77)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  1 09:53:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  1 09:53:18 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 78 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=77/78 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[48,77)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:53:18 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 78 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=60/61 n=1 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=12.373157501s) [1] r=-1 lpr=78 pi=[60,78)/1 crt=32'39 active pruub 149.470657349s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:18 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 78 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=60/61 n=1 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=12.372897148s) [1] r=-1 lpr=78 pi=[60,78)/1 crt=32'39 unknown NOTIFY pruub 149.470657349s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:53:18 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78) [1] r=0 lpr=78 pi=[60,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:53:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:53:18 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb  1 09:53:18 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb  1 09:53:19 np0005604375 systemd-logind[786]: New session 34 of user zuul.
Feb  1 09:53:19 np0005604375 systemd[1]: Started Session 34 of User zuul.
Feb  1 09:53:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb  1 09:53:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb  1 09:53:19 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb  1 09:53:19 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=77/78 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.960889816s) [2] async=[2] r=-1 lpr=79 pi=[48,79)/1 crt=38'483 lcod 0'0 active pruub 149.339111328s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:19 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=77/78 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.960788727s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 149.339111328s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:19 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=77/78 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.966958046s) [2] async=[2] r=-1 lpr=79 pi=[48,79)/1 crt=57'487 lcod 57'486 active pruub 149.346572876s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:19 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=77/78 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.966865540s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 149.346572876s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:19 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=78/79 n=1 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78) [1] r=0 lpr=78 pi=[60,78)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:19 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:19 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:19 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:19 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:19 np0005604375 python3.9[99692]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  1 09:53:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Feb  1 09:53:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  1 09:53:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb  1 09:53:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb  1 09:53:20 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 80 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=79/80 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:20 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 80 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:20 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.a scrub starts
Feb  1 09:53:20 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.a scrub ok
Feb  1 09:53:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Feb  1 09:53:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Feb  1 09:53:20 np0005604375 python3.9[99866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:53:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  1 09:53:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  1 09:53:21 np0005604375 python3.9[100022]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:53:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 3 objects/s recovering
Feb  1 09:53:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Feb  1 09:53:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  1 09:53:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb  1 09:53:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  1 09:53:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb  1 09:53:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  1 09:53:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  1 09:53:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  1 09:53:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  1 09:53:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb  1 09:53:22 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb  1 09:53:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 81 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81 pruub=14.861714363s) [2] r=-1 lpr=81 pi=[56,81)/1 crt=32'39 active pruub 155.773696899s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:22 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 81 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81 pruub=14.861543655s) [2] r=-1 lpr=81 pi=[56,81)/1 crt=32'39 unknown NOTIFY pruub 155.773696899s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:22 np0005604375 python3.9[100175]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:53:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb  1 09:53:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  1 09:53:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  1 09:53:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb  1 09:53:23 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb  1 09:53:23 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 82 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=81/82 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:23 np0005604375 python3.9[100329]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:53:23 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.b scrub starts
Feb  1 09:53:23 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.b scrub ok
Feb  1 09:53:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 108 B/s, 3 objects/s recovering
Feb  1 09:53:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb  1 09:53:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb  1 09:53:24 np0005604375 python3.9[100481]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:53:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb  1 09:53:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  1 09:53:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb  1 09:53:24 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb  1 09:53:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb  1 09:53:24 np0005604375 python3.9[100631]: ansible-ansible.builtin.service_facts Invoked
Feb  1 09:53:24 np0005604375 network[100648]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 09:53:24 np0005604375 network[100649]: 'network-scripts' will be removed from distribution in near future.
Feb  1 09:53:24 np0005604375 network[100650]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 09:53:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  1 09:53:25 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb  1 09:53:25 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb  1 09:53:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 201 B/s, 3 objects/s recovering
Feb  1 09:53:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb  1 09:53:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb  1 09:53:25 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb  1 09:53:25 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb  1 09:53:26 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb  1 09:53:26 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb  1 09:53:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb  1 09:53:26 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  1 09:53:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb  1 09:53:26 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb  1 09:53:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb  1 09:53:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  1 09:53:27 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.c scrub starts
Feb  1 09:53:27 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.c scrub ok
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Feb  1 09:53:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb  1 09:53:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb  1 09:53:27 np0005604375 python3.9[100910]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7614284514635656e-06 of space, bias 4.0, pg target 0.0021137141417562786 quantized to 16 (current 16)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:53:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 09:53:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb  1 09:53:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  1 09:53:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb  1 09:53:28 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb  1 09:53:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb  1 09:53:28 np0005604375 python3.9[101060]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:53:28 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Feb  1 09:53:28 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Feb  1 09:53:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  1 09:53:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Feb  1 09:53:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb  1 09:53:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb  1 09:53:29 np0005604375 python3.9[101214]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:53:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb  1 09:53:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  1 09:53:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb  1 09:53:30 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb  1 09:53:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb  1 09:53:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:30 np0005604375 python3.9[101372]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:53:31 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  1 09:53:31 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 86 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.644948006s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=55'484 lcod 55'484 active pruub 163.773727417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:31 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 86 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.644883156s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=55'484 lcod 55'484 unknown NOTIFY pruub 163.773727417s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:31 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=86) [2] r=0 lpr=86 pi=[56,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:31 np0005604375 python3.9[101456]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:53:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb  1 09:53:31 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb  1 09:53:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb  1 09:53:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb  1 09:53:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  1 09:53:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb  1 09:53:32 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb  1 09:53:32 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 87 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=0 lpr=87 pi=[56,87)/1 crt=55'484 lcod 55'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:32 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 87 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=0 lpr=87 pi=[56,87)/1 crt=55'484 lcod 55'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:32 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[56,87)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:32 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[56,87)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb  1 09:53:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb  1 09:53:33 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb  1 09:53:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  1 09:53:33 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 88 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=87/88 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[56,87)/1 crt=57'485 lcod 55'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:33 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb  1 09:53:33 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb  1 09:53:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb  1 09:53:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb  1 09:53:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb  1 09:53:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  1 09:53:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb  1 09:53:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb  1 09:53:34 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb  1 09:53:34 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=87/88 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89 pruub=14.976616859s) [2] async=[2] r=-1 lpr=89 pi=[56,89)/1 crt=57'485 lcod 55'484 active pruub 168.043777466s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:34 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=87/88 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89 pruub=14.976508141s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=57'485 lcod 55'484 unknown NOTIFY pruub 168.043777466s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:34 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=89 pruub=9.675940514s) [1] r=-1 lpr=89 pi=[55,89)/1 crt=38'483 active pruub 162.744354248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:34 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=89 pruub=9.675878525s) [1] r=-1 lpr=89 pi=[55,89)/1 crt=38'483 unknown NOTIFY pruub 162.744354248s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:34 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:34 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:34 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb  1 09:53:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb  1 09:53:35 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb  1 09:53:35 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[55,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:35 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 90 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=0 lpr=90 pi=[55,90)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:35 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 90 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=0 lpr=90 pi=[55,90)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:35 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[55,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  1 09:53:35 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 90 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=89/90 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Feb  1 09:53:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Feb  1 09:53:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb  1 09:53:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb  1 09:53:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb  1 09:53:36 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Feb  1 09:53:36 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Feb  1 09:53:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb  1 09:53:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  1 09:53:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb  1 09:53:36 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb  1 09:53:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb  1 09:53:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  1 09:53:36 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 91 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=90/91 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[55,90)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:36 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.a scrub starts
Feb  1 09:53:36 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.a scrub ok
Feb  1 09:53:37 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Feb  1 09:53:37 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Feb  1 09:53:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb  1 09:53:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb  1 09:53:37 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb  1 09:53:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 91 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=91 pruub=14.964743614s) [0] r=-1 lpr=91 pi=[65,91)/1 crt=38'483 active pruub 163.869369507s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:37 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 92 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=91 pruub=14.964620590s) [0] r=-1 lpr=91 pi=[65,91)/1 crt=38'483 unknown NOTIFY pruub 163.869369507s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=91) [0] r=0 lpr=92 pi=[65,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=90/91 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92 pruub=15.021860123s) [1] async=[1] r=-1 lpr=92 pi=[55,92)/1 crt=38'483 active pruub 171.127532959s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:37 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=90/91 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92 pruub=15.021791458s) [1] r=-1 lpr=92 pi=[55,92)/1 crt=38'483 unknown NOTIFY pruub 171.127532959s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92) [1] r=0 lpr=92 pi=[55,92)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:37 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92) [1] r=0 lpr=92 pi=[55,92)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb  1 09:53:38 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb  1 09:53:38 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb  1 09:53:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb  1 09:53:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb  1 09:53:38 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb  1 09:53:38 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 93 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=0 lpr=93 pi=[65,93)/2 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:38 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 93 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=0 lpr=93 pi=[65,93)/2 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[65,93)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:38 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[65,93)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:38 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 93 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=92/93 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92) [1] r=0 lpr=92 pi=[55,92)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb  1 09:53:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb  1 09:53:39 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb  1 09:53:39 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 94 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=93/94 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[65,93)/2 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:39 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Feb  1 09:53:39 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Feb  1 09:53:40 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb  1 09:53:40 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb  1 09:53:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb  1 09:53:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb  1 09:53:40 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb  1 09:53:40 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=93/94 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95 pruub=15.071164131s) [0] async=[0] r=-1 lpr=95 pi=[65,95)/2 crt=38'483 active pruub 166.934234619s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95) [0] r=0 lpr=95 pi=[65,95)/2 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:40 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95) [0] r=0 lpr=95 pi=[65,95)/2 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:40 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=93/94 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95 pruub=15.071048737s) [0] r=-1 lpr=95 pi=[65,95)/2 crt=38'483 unknown NOTIFY pruub 166.934234619s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb  1 09:53:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb  1 09:53:41 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb  1 09:53:41 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 96 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=95/96 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95) [0] r=0 lpr=95 pi=[65,95)/2 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Feb  1 09:53:42 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb  1 09:53:42 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb  1 09:53:43 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb  1 09:53:43 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb  1 09:53:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 1 objects/s recovering
Feb  1 09:53:44 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb  1 09:53:44 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Feb  1 09:53:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Feb  1 09:53:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Feb  1 09:53:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 1 objects/s recovering
Feb  1 09:53:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb  1 09:53:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb  1 09:53:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb  1 09:53:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb  1 09:53:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  1 09:53:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb  1 09:53:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb  1 09:53:47 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb  1 09:53:48 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb  1 09:53:48 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb  1 09:53:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:53:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:53:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:53:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:53:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:53:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:53:48 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb  1 09:53:48 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb  1 09:53:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  1 09:53:49 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb  1 09:53:49 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb  1 09:53:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb  1 09:53:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb  1 09:53:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb  1 09:53:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  1 09:53:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb  1 09:53:49 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb  1 09:53:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb  1 09:53:50 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb  1 09:53:50 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb  1 09:53:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  1 09:53:51 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb  1 09:53:51 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb  1 09:53:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb  1 09:53:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb  1 09:53:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb  1 09:53:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  1 09:53:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb  1 09:53:51 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 99 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99 pruub=8.273852348s) [2] r=-1 lpr=99 pi=[55,99)/1 crt=57'486 lcod 57'486 active pruub 178.746078491s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:51 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 99 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99 pruub=8.273748398s) [2] r=-1 lpr=99 pi=[55,99)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 178.746078491s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb  1 09:53:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb  1 09:53:52 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb  1 09:53:52 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb  1 09:53:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb  1 09:53:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb  1 09:53:52 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb  1 09:53:52 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 100 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=0 lpr=100 pi=[55,100)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:52 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 100 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=0 lpr=100 pi=[55,100)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:52 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:52 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:52 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  1 09:53:53 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb  1 09:53:53 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb  1 09:53:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb  1 09:53:53 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 101 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[55,100)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb  1 09:53:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  1 09:53:54 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb  1 09:53:54 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb  1 09:53:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb  1 09:53:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb  1 09:53:54 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb  1 09:53:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102 pruub=14.973391533s) [2] async=[2] r=-1 lpr=102 pi=[55,102)/1 crt=57'487 lcod 57'486 active pruub 188.486877441s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:54 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102 pruub=14.973290443s) [2] r=-1 lpr=102 pi=[55,102)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 188.486877441s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:53:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:53:54 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:53:55 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.c scrub starts
Feb  1 09:53:55 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.c scrub ok
Feb  1 09:53:55 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb  1 09:53:55 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb  1 09:53:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:53:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Feb  1 09:53:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb  1 09:53:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb  1 09:53:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb  1 09:53:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb  1 09:53:55 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb  1 09:53:55 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:53:57 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb  1 09:53:57 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb  1 09:53:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 1 objects/s recovering
Feb  1 09:53:58 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb  1 09:53:58 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb  1 09:53:59 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb  1 09:53:59 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb  1 09:53:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb  1 09:54:00 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb  1 09:54:00 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb  1 09:54:00 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb  1 09:54:00 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb  1 09:54:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:00 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb  1 09:54:00 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb  1 09:54:01 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb  1 09:54:01 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb  1 09:54:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Feb  1 09:54:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb  1 09:54:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb  1 09:54:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb  1 09:54:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb  1 09:54:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb  1 09:54:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb  1 09:54:01 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb  1 09:54:02 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb  1 09:54:02 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb  1 09:54:02 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb  1 09:54:02 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb  1 09:54:02 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb  1 09:54:02 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb  1 09:54:02 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb  1 09:54:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb  1 09:54:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb  1 09:54:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb  1 09:54:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb  1 09:54:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  1 09:54:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb  1 09:54:04 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb  1 09:54:04 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064671516s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 active pruub 187.699172974s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:04 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:04 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:04 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb  1 09:54:04 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb  1 09:54:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb  1 09:54:05 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:05 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  1 09:54:05 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:05 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:05 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb  1 09:54:05 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:54:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:54:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb  1 09:54:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb  1 09:54:06 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb  1 09:54:06 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:54:06 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:54:06 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:54:06 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:54:06 np0005604375 podman[101750]: 2026-02-01 14:54:06.107436639 +0000 UTC m=+0.044505823 container create 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 09:54:06 np0005604375 systemd[1]: Started libpod-conmon-6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0.scope.
Feb  1 09:54:06 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:54:06 np0005604375 podman[101750]: 2026-02-01 14:54:06.082008429 +0000 UTC m=+0.019077663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:54:06 np0005604375 podman[101750]: 2026-02-01 14:54:06.186430103 +0000 UTC m=+0.123499337 container init 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:54:06 np0005604375 podman[101750]: 2026-02-01 14:54:06.194353134 +0000 UTC m=+0.131422318 container start 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:54:06 np0005604375 flamboyant_boyd[101766]: 167 167
Feb  1 09:54:06 np0005604375 systemd[1]: libpod-6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0.scope: Deactivated successfully.
Feb  1 09:54:06 np0005604375 podman[101750]: 2026-02-01 14:54:06.199154788 +0000 UTC m=+0.136223972 container attach 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 09:54:06 np0005604375 podman[101750]: 2026-02-01 14:54:06.199475897 +0000 UTC m=+0.136545081 container died 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:54:06 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f602a4d7c7890c421c9fff58c0da8205e9e65094ef0930f6546bd2bc37dd4b1e-merged.mount: Deactivated successfully.
Feb  1 09:54:06 np0005604375 podman[101750]: 2026-02-01 14:54:06.24401889 +0000 UTC m=+0.181088054 container remove 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 09:54:06 np0005604375 systemd[1]: libpod-conmon-6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0.scope: Deactivated successfully.
Feb  1 09:54:06 np0005604375 podman[101789]: 2026-02-01 14:54:06.392628648 +0000 UTC m=+0.044042330 container create c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:54:06 np0005604375 systemd[1]: Started libpod-conmon-c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1.scope.
Feb  1 09:54:06 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:54:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:06 np0005604375 podman[101789]: 2026-02-01 14:54:06.465022768 +0000 UTC m=+0.116436450 container init c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:54:06 np0005604375 podman[101789]: 2026-02-01 14:54:06.372443235 +0000 UTC m=+0.023856937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:54:06 np0005604375 podman[101789]: 2026-02-01 14:54:06.475609734 +0000 UTC m=+0.127023406 container start c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 09:54:06 np0005604375 podman[101789]: 2026-02-01 14:54:06.479224515 +0000 UTC m=+0.130638257 container attach c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb  1 09:54:06 np0005604375 hardcore_chatterjee[101806]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:54:06 np0005604375 hardcore_chatterjee[101806]: --> All data devices are unavailable
Feb  1 09:54:06 np0005604375 systemd[1]: libpod-c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1.scope: Deactivated successfully.
Feb  1 09:54:06 np0005604375 podman[101789]: 2026-02-01 14:54:06.970936978 +0000 UTC m=+0.622350640 container died c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb  1 09:54:07 np0005604375 systemd[1]: var-lib-containers-storage-overlay-dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0-merged.mount: Deactivated successfully.
Feb  1 09:54:07 np0005604375 podman[101789]: 2026-02-01 14:54:07.023453444 +0000 UTC m=+0.674867146 container remove c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:54:07 np0005604375 systemd[1]: libpod-conmon-c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1.scope: Deactivated successfully.
Feb  1 09:54:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb  1 09:54:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb  1 09:54:07 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb  1 09:54:07 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977913857s) [0] async=[0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 active pruub 193.453887939s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:07 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:07 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:07 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:07 np0005604375 podman[101902]: 2026-02-01 14:54:07.499637573 +0000 UTC m=+0.045734507 container create 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 09:54:07 np0005604375 systemd[1]: Started libpod-conmon-7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682.scope.
Feb  1 09:54:07 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:54:07 np0005604375 podman[101902]: 2026-02-01 14:54:07.482771112 +0000 UTC m=+0.028868076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:54:07 np0005604375 podman[101902]: 2026-02-01 14:54:07.582208137 +0000 UTC m=+0.128305141 container init 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 09:54:07 np0005604375 podman[101902]: 2026-02-01 14:54:07.587522756 +0000 UTC m=+0.133619720 container start 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:54:07 np0005604375 podman[101902]: 2026-02-01 14:54:07.593202054 +0000 UTC m=+0.139298978 container attach 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 09:54:07 np0005604375 elegant_agnesi[101919]: 167 167
Feb  1 09:54:07 np0005604375 systemd[1]: libpod-7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682.scope: Deactivated successfully.
Feb  1 09:54:07 np0005604375 podman[101902]: 2026-02-01 14:54:07.594071538 +0000 UTC m=+0.140168502 container died 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:54:07 np0005604375 systemd[1]: var-lib-containers-storage-overlay-27938188ca148d3415040e8a8212c0662c5118535d5cf00d77a178e03958f685-merged.mount: Deactivated successfully.
Feb  1 09:54:07 np0005604375 podman[101902]: 2026-02-01 14:54:07.627328766 +0000 UTC m=+0.173425700 container remove 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:54:07 np0005604375 systemd[1]: libpod-conmon-7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682.scope: Deactivated successfully.
Feb  1 09:54:07 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb  1 09:54:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:07 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb  1 09:54:07 np0005604375 podman[101943]: 2026-02-01 14:54:07.783562577 +0000 UTC m=+0.065772507 container create 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:54:07 np0005604375 systemd[1]: Started libpod-conmon-6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0.scope.
Feb  1 09:54:07 np0005604375 podman[101943]: 2026-02-01 14:54:07.751538283 +0000 UTC m=+0.033748263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:54:07 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:54:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:07 np0005604375 podman[101943]: 2026-02-01 14:54:07.896226291 +0000 UTC m=+0.178436211 container init 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 09:54:07 np0005604375 podman[101943]: 2026-02-01 14:54:07.902751253 +0000 UTC m=+0.184961153 container start 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 09:54:07 np0005604375 podman[101943]: 2026-02-01 14:54:07.906484567 +0000 UTC m=+0.188694487 container attach 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 09:54:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb  1 09:54:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb  1 09:54:08 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb  1 09:54:08 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]: {
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:    "0": [
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:        {
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "devices": [
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "/dev/loop3"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            ],
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_name": "ceph_lv0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_size": "21470642176",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "name": "ceph_lv0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "tags": {
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cluster_name": "ceph",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.crush_device_class": "",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.encrypted": "0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.objectstore": "bluestore",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osd_id": "0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.type": "block",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.vdo": "0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.with_tpm": "0"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            },
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "type": "block",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "vg_name": "ceph_vg0"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:        }
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:    ],
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:    "1": [
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:        {
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "devices": [
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "/dev/loop4"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            ],
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_name": "ceph_lv1",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_size": "21470642176",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "name": "ceph_lv1",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "tags": {
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cluster_name": "ceph",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.crush_device_class": "",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.encrypted": "0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.objectstore": "bluestore",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osd_id": "1",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.type": "block",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.vdo": "0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.with_tpm": "0"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            },
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "type": "block",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "vg_name": "ceph_vg1"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:        }
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:    ],
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:    "2": [
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:        {
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "devices": [
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "/dev/loop5"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            ],
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_name": "ceph_lv2",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_size": "21470642176",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "name": "ceph_lv2",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "tags": {
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.cluster_name": "ceph",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.crush_device_class": "",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.encrypted": "0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.objectstore": "bluestore",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osd_id": "2",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.type": "block",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.vdo": "0",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:                "ceph.with_tpm": "0"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            },
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "type": "block",
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:            "vg_name": "ceph_vg2"
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:        }
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]:    ]
Feb  1 09:54:08 np0005604375 thirsty_hellman[101960]: }
Feb  1 09:54:08 np0005604375 systemd[1]: libpod-6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0.scope: Deactivated successfully.
Feb  1 09:54:08 np0005604375 podman[101943]: 2026-02-01 14:54:08.257088432 +0000 UTC m=+0.539298372 container died 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 09:54:08 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff-merged.mount: Deactivated successfully.
Feb  1 09:54:08 np0005604375 podman[101943]: 2026-02-01 14:54:08.318184537 +0000 UTC m=+0.600394457 container remove 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:54:08 np0005604375 systemd[1]: libpod-conmon-6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0.scope: Deactivated successfully.
Feb  1 09:54:08 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb  1 09:54:08 np0005604375 podman[102042]: 2026-02-01 14:54:08.723801117 +0000 UTC m=+0.041901980 container create 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 09:54:08 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb  1 09:54:08 np0005604375 systemd[1]: Started libpod-conmon-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope.
Feb  1 09:54:08 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:54:08 np0005604375 podman[102042]: 2026-02-01 14:54:08.701528386 +0000 UTC m=+0.019629199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:54:08 np0005604375 podman[102042]: 2026-02-01 14:54:08.805065435 +0000 UTC m=+0.123166358 container init 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:54:08 np0005604375 podman[102042]: 2026-02-01 14:54:08.812790541 +0000 UTC m=+0.130891394 container start 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:54:08 np0005604375 podman[102042]: 2026-02-01 14:54:08.816495404 +0000 UTC m=+0.134596317 container attach 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 09:54:08 np0005604375 systemd[1]: libpod-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope: Deactivated successfully.
Feb  1 09:54:08 np0005604375 nice_gates[102058]: 167 167
Feb  1 09:54:08 np0005604375 conmon[102058]: conmon 833ac2b26a960b2b3175 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope/container/memory.events
Feb  1 09:54:08 np0005604375 podman[102042]: 2026-02-01 14:54:08.81920893 +0000 UTC m=+0.137309773 container died 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:54:08 np0005604375 systemd[1]: var-lib-containers-storage-overlay-593bf71f63e169de258920f10740b68cf7a8e0924e10a6aae28c9b4fbbaee207-merged.mount: Deactivated successfully.
Feb  1 09:54:08 np0005604375 podman[102042]: 2026-02-01 14:54:08.866993133 +0000 UTC m=+0.185093996 container remove 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:54:08 np0005604375 systemd[1]: libpod-conmon-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope: Deactivated successfully.
Feb  1 09:54:09 np0005604375 podman[102079]: 2026-02-01 14:54:09.041462743 +0000 UTC m=+0.054267766 container create 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 09:54:09 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb  1 09:54:09 np0005604375 systemd[1]: Started libpod-conmon-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope.
Feb  1 09:54:09 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb  1 09:54:09 np0005604375 podman[102079]: 2026-02-01 14:54:09.013896183 +0000 UTC m=+0.026701266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:54:09 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:54:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:54:09 np0005604375 podman[102079]: 2026-02-01 14:54:09.139730114 +0000 UTC m=+0.152535137 container init 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 09:54:09 np0005604375 podman[102079]: 2026-02-01 14:54:09.145524616 +0000 UTC m=+0.158329639 container start 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:54:09 np0005604375 podman[102079]: 2026-02-01 14:54:09.149197958 +0000 UTC m=+0.162003041 container attach 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:54:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:09 np0005604375 lvm[102176]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:54:09 np0005604375 lvm[102176]: VG ceph_vg1 finished
Feb  1 09:54:09 np0005604375 lvm[102175]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:54:09 np0005604375 lvm[102175]: VG ceph_vg0 finished
Feb  1 09:54:09 np0005604375 lvm[102178]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:54:09 np0005604375 lvm[102178]: VG ceph_vg2 finished
Feb  1 09:54:09 np0005604375 mystifying_mclean[102096]: {}
Feb  1 09:54:09 np0005604375 systemd[1]: libpod-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope: Deactivated successfully.
Feb  1 09:54:09 np0005604375 podman[102079]: 2026-02-01 14:54:09.907038438 +0000 UTC m=+0.919843491 container died 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:54:09 np0005604375 systemd[1]: libpod-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope: Consumed 1.087s CPU time.
Feb  1 09:54:09 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e-merged.mount: Deactivated successfully.
Feb  1 09:54:09 np0005604375 podman[102079]: 2026-02-01 14:54:09.942399425 +0000 UTC m=+0.955204418 container remove 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  1 09:54:09 np0005604375 systemd[1]: libpod-conmon-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope: Deactivated successfully.
Feb  1 09:54:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:54:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:54:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:54:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:54:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:54:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:54:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:10 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb  1 09:54:10 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb  1 09:54:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb  1 09:54:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb  1 09:54:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb  1 09:54:11 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb  1 09:54:11 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb  1 09:54:12 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb  1 09:54:12 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb  1 09:54:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb  1 09:54:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  1 09:54:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb  1 09:54:12 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb  1 09:54:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb  1 09:54:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  1 09:54:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 1 objects/s recovering
Feb  1 09:54:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb  1 09:54:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb  1 09:54:13 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb  1 09:54:13 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb  1 09:54:13 np0005604375 python3.9[102368]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:54:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb  1 09:54:14 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb  1 09:54:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  1 09:54:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb  1 09:54:14 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb  1 09:54:14 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319766998s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 active pruub 195.875000000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:14 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:14 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:14 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb  1 09:54:14 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb  1 09:54:14 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb  1 09:54:14 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb  1 09:54:15 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb  1 09:54:15 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb  1 09:54:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb  1 09:54:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb  1 09:54:15 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb  1 09:54:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  1 09:54:15 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:15 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:15 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:15 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:15 np0005604375 python3.9[102655]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  1 09:54:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb  1 09:54:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb  1 09:54:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb  1 09:54:16 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb  1 09:54:16 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:54:16 np0005604375 python3.9[102807]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  1 09:54:16 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb  1 09:54:16 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb  1 09:54:17 np0005604375 python3.9[102959]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:54:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb  1 09:54:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb  1 09:54:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb  1 09:54:17 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:17 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:17 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244387627s) [0] async=[0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 active pruub 203.837112427s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:17 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:54:17
Feb  1 09:54:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:54:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Feb  1 09:54:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:17 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb  1 09:54:17 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb  1 09:54:17 np0005604375 python3.9[103111]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  1 09:54:18 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb  1 09:54:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb  1 09:54:18 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb  1 09:54:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb  1 09:54:18 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb  1 09:54:18 np0005604375 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:54:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:54:18 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb  1 09:54:18 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb  1 09:54:19 np0005604375 python3.9[103263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:54:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:19 np0005604375 python3.9[103415]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:54:20 np0005604375 python3.9[103493]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:54:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb  1 09:54:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb  1 09:54:21 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb  1 09:54:21 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb  1 09:54:21 np0005604375 python3.9[103645]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:54:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Feb  1 09:54:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  1 09:54:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:54:22 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb  1 09:54:22 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb  1 09:54:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb  1 09:54:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:54:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb  1 09:54:22 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb  1 09:54:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.231036186s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 active pruub 204.880294800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:22 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  1 09:54:22 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:22 np0005604375 python3.9[103799]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  1 09:54:23 np0005604375 python3.9[103952]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  1 09:54:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb  1 09:54:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb  1 09:54:23 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb  1 09:54:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  1 09:54:23 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:23 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:23 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:23 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Feb  1 09:54:23 np0005604375 python3.9[104105]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  1 09:54:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb  1 09:54:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb  1 09:54:24 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb  1 09:54:24 np0005604375 python3.9[104257]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  1 09:54:24 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb  1 09:54:24 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:54:24 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb  1 09:54:25 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb  1 09:54:25 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb  1 09:54:25 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb  1 09:54:25 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb  1 09:54:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb  1 09:54:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb  1 09:54:25 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb  1 09:54:25 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:25 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 09:54:25 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578289032s) [1] async=[1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 active pruub 212.280242920s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 09:54:25 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 09:54:25 np0005604375 python3.9[104409]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:54:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Feb  1 09:54:25 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb  1 09:54:25 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb  1 09:54:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb  1 09:54:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb  1 09:54:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb  1 09:54:26 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 09:54:26 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb  1 09:54:26 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb  1 09:54:27 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb  1 09:54:27 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb  1 09:54:27 np0005604375 python3.9[104562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Feb  1 09:54:27 np0005604375 python3.9[104714]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1243115580546916e-06 of space, bias 4.0, pg target 0.00254917386966563 quantized to 16 (current 16)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.260577423976037e-06 of space, bias 1.0, pg target 0.001278173227192811 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:54:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 09:54:28 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb  1 09:54:28 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb  1 09:54:28 np0005604375 python3.9[104792]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:54:29 np0005604375 python3.9[104944]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:54:29 np0005604375 python3.9[105022]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:54:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Feb  1 09:54:30 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb  1 09:54:30 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb  1 09:54:30 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb  1 09:54:30 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb  1 09:54:30 np0005604375 python3.9[105174]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:54:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:31 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb  1 09:54:31 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb  1 09:54:31 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb  1 09:54:31 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb  1 09:54:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Feb  1 09:54:32 np0005604375 python3.9[105325]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:54:32 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb  1 09:54:32 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb  1 09:54:32 np0005604375 python3.9[105477]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  1 09:54:33 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb  1 09:54:33 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb  1 09:54:33 np0005604375 python3.9[105627]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:54:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Feb  1 09:54:34 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb  1 09:54:34 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb  1 09:54:34 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb  1 09:54:34 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb  1 09:54:34 np0005604375 python3.9[105779]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:54:34 np0005604375 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  1 09:54:34 np0005604375 systemd[1]: tuned.service: Deactivated successfully.
Feb  1 09:54:34 np0005604375 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  1 09:54:34 np0005604375 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  1 09:54:35 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb  1 09:54:35 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb  1 09:54:35 np0005604375 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  1 09:54:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:35 np0005604375 python3.9[105941]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  1 09:54:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb  1 09:54:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb  1 09:54:35 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb  1 09:54:35 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb  1 09:54:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:37 np0005604375 python3.9[106093]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:54:38 np0005604375 python3.9[106247]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:54:39 np0005604375 systemd[1]: session-34.scope: Deactivated successfully.
Feb  1 09:54:39 np0005604375 systemd[1]: session-34.scope: Consumed 1min 429ms CPU time.
Feb  1 09:54:39 np0005604375 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Feb  1 09:54:39 np0005604375 systemd-logind[786]: Removed session 34.
Feb  1 09:54:39 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb  1 09:54:39 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb  1 09:54:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:39 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb  1 09:54:39 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb  1 09:54:40 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb  1 09:54:40 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb  1 09:54:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:41 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb  1 09:54:41 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb  1 09:54:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:43 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb  1 09:54:43 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb  1 09:54:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:43 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb  1 09:54:43 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb  1 09:54:44 np0005604375 systemd-logind[786]: New session 35 of user zuul.
Feb  1 09:54:44 np0005604375 systemd[1]: Started Session 35 of User zuul.
Feb  1 09:54:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb  1 09:54:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb  1 09:54:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:45 np0005604375 python3.9[106427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:54:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:46 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb  1 09:54:46 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb  1 09:54:46 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb  1 09:54:46 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb  1 09:54:46 np0005604375 python3.9[106583]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  1 09:54:47 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb  1 09:54:47 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb  1 09:54:47 np0005604375 python3.9[106736]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:54:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:48 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb  1 09:54:48 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb  1 09:54:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:54:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:54:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:54:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:54:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:54:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:54:48 np0005604375 python3.9[106820]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  1 09:54:49 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb  1 09:54:49 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb  1 09:54:49 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb  1 09:54:49 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb  1 09:54:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:50 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb  1 09:54:50 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb  1 09:54:50 np0005604375 python3.9[106973]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:54:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:52 np0005604375 python3.9[107126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 09:54:53 np0005604375 python3.9[107279]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:54:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:54 np0005604375 python3.9[107431]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  1 09:54:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:54:55 np0005604375 python3.9[107581]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:54:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:56 np0005604375 python3.9[107739]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:54:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:54:58 np0005604375 python3.9[107892]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:54:59 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb  1 09:54:59 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb  1 09:54:59 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb  1 09:54:59 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb  1 09:54:59 np0005604375 python3.9[108179]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  1 09:54:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:00 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb  1 09:55:00 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb  1 09:55:00 np0005604375 python3.9[108329]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:55:01 np0005604375 python3.9[108483]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:55:01 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb  1 09:55:01 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb  1 09:55:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:01 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb  1 09:55:01 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb  1 09:55:02 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb  1 09:55:02 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb  1 09:55:02 np0005604375 python3.9[108636]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:55:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:03 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb  1 09:55:03 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb  1 09:55:04 np0005604375 python3.9[108789]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:55:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:05 np0005604375 python3.9[108943]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb  1 09:55:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:06 np0005604375 systemd[1]: session-35.scope: Deactivated successfully.
Feb  1 09:55:06 np0005604375 systemd[1]: session-35.scope: Consumed 16.701s CPU time.
Feb  1 09:55:06 np0005604375 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Feb  1 09:55:06 np0005604375 systemd-logind[786]: Removed session 35.
Feb  1 09:55:07 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb  1 09:55:07 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb  1 09:55:07 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb  1 09:55:07 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb  1 09:55:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:07 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb  1 09:55:07 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb  1 09:55:09 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb  1 09:55:09 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb  1 09:55:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:09 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb  1 09:55:09 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:55:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:55:10 np0005604375 podman[109111]: 2026-02-01 14:55:10.894406862 +0000 UTC m=+0.037138476 container create 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:55:10 np0005604375 systemd[1]: Started libpod-conmon-11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8.scope.
Feb  1 09:55:10 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:55:10 np0005604375 podman[109111]: 2026-02-01 14:55:10.965760335 +0000 UTC m=+0.108491999 container init 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb  1 09:55:10 np0005604375 podman[109111]: 2026-02-01 14:55:10.971354134 +0000 UTC m=+0.114085778 container start 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:55:10 np0005604375 podman[109111]: 2026-02-01 14:55:10.974413904 +0000 UTC m=+0.117145558 container attach 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  1 09:55:10 np0005604375 boring_yalow[109128]: 167 167
Feb  1 09:55:10 np0005604375 systemd[1]: libpod-11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8.scope: Deactivated successfully.
Feb  1 09:55:10 np0005604375 podman[109111]: 2026-02-01 14:55:10.879411527 +0000 UTC m=+0.022143171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:55:10 np0005604375 podman[109111]: 2026-02-01 14:55:10.976941293 +0000 UTC m=+0.119672947 container died 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 09:55:11 np0005604375 systemd[1]: var-lib-containers-storage-overlay-8b6b37eb680aeefc885fa3a227901fcd92db82d913c737883cbf0ff83a32e479-merged.mount: Deactivated successfully.
Feb  1 09:55:11 np0005604375 podman[109111]: 2026-02-01 14:55:11.019088043 +0000 UTC m=+0.161819687 container remove 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 09:55:11 np0005604375 systemd[1]: libpod-conmon-11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8.scope: Deactivated successfully.
Feb  1 09:55:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:55:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:55:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:55:11 np0005604375 podman[109152]: 2026-02-01 14:55:11.180967681 +0000 UTC m=+0.044824093 container create 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 09:55:11 np0005604375 systemd[1]: Started libpod-conmon-506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67.scope.
Feb  1 09:55:11 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:55:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:11 np0005604375 podman[109152]: 2026-02-01 14:55:11.159327193 +0000 UTC m=+0.023183645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:55:11 np0005604375 podman[109152]: 2026-02-01 14:55:11.268285442 +0000 UTC m=+0.132141904 container init 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 09:55:11 np0005604375 podman[109152]: 2026-02-01 14:55:11.274226489 +0000 UTC m=+0.138082901 container start 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 09:55:11 np0005604375 podman[109152]: 2026-02-01 14:55:11.279231584 +0000 UTC m=+0.143087996 container attach 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:55:11 np0005604375 systemd-logind[786]: New session 36 of user zuul.
Feb  1 09:55:11 np0005604375 systemd[1]: Started Session 36 of User zuul.
Feb  1 09:55:11 np0005604375 jovial_jackson[109169]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:55:11 np0005604375 jovial_jackson[109169]: --> All data devices are unavailable
Feb  1 09:55:11 np0005604375 systemd[1]: libpod-506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67.scope: Deactivated successfully.
Feb  1 09:55:11 np0005604375 podman[109152]: 2026-02-01 14:55:11.67945126 +0000 UTC m=+0.543307692 container died 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:55:11 np0005604375 systemd[1]: var-lib-containers-storage-overlay-fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16-merged.mount: Deactivated successfully.
Feb  1 09:55:11 np0005604375 podman[109152]: 2026-02-01 14:55:11.736501974 +0000 UTC m=+0.600358376 container remove 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:55:11 np0005604375 systemd[1]: libpod-conmon-506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67.scope: Deactivated successfully.
Feb  1 09:55:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:12 np0005604375 podman[109320]: 2026-02-01 14:55:12.151094301 +0000 UTC m=+0.035055048 container create 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 09:55:12 np0005604375 systemd[1]: Started libpod-conmon-9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb.scope.
Feb  1 09:55:12 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:55:12 np0005604375 podman[109320]: 2026-02-01 14:55:12.22617595 +0000 UTC m=+0.110136717 container init 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 09:55:12 np0005604375 podman[109320]: 2026-02-01 14:55:12.231643226 +0000 UTC m=+0.115603973 container start 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 09:55:12 np0005604375 busy_brahmagupta[109366]: 167 167
Feb  1 09:55:12 np0005604375 podman[109320]: 2026-02-01 14:55:12.137242462 +0000 UTC m=+0.021203229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:55:12 np0005604375 systemd[1]: libpod-9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb.scope: Deactivated successfully.
Feb  1 09:55:12 np0005604375 podman[109320]: 2026-02-01 14:55:12.235448414 +0000 UTC m=+0.119409191 container attach 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:55:12 np0005604375 podman[109320]: 2026-02-01 14:55:12.236339694 +0000 UTC m=+0.120300461 container died 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 09:55:12 np0005604375 systemd[1]: var-lib-containers-storage-overlay-8199746a437db738c2ae41b8f196bff639527a9e1912001c16afe7c92656fda0-merged.mount: Deactivated successfully.
Feb  1 09:55:12 np0005604375 podman[109320]: 2026-02-01 14:55:12.269891927 +0000 UTC m=+0.153852674 container remove 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb  1 09:55:12 np0005604375 systemd[1]: libpod-conmon-9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb.scope: Deactivated successfully.
Feb  1 09:55:12 np0005604375 podman[109460]: 2026-02-01 14:55:12.385040809 +0000 UTC m=+0.030250368 container create 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:55:12 np0005604375 systemd[1]: Started libpod-conmon-2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528.scope.
Feb  1 09:55:12 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:55:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:12 np0005604375 podman[109460]: 2026-02-01 14:55:12.457262282 +0000 UTC m=+0.102471841 container init 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:55:12 np0005604375 podman[109460]: 2026-02-01 14:55:12.462003941 +0000 UTC m=+0.107213500 container start 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:55:12 np0005604375 podman[109460]: 2026-02-01 14:55:12.370953444 +0000 UTC m=+0.016163023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:55:12 np0005604375 podman[109460]: 2026-02-01 14:55:12.476585437 +0000 UTC m=+0.121795016 container attach 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:55:12 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb  1 09:55:12 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb  1 09:55:12 np0005604375 python3.9[109454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:55:12 np0005604375 recursing_easley[109477]: {
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:    "0": [
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:        {
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "devices": [
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "/dev/loop3"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            ],
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_name": "ceph_lv0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_size": "21470642176",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "name": "ceph_lv0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "tags": {
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cluster_name": "ceph",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.crush_device_class": "",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.encrypted": "0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.objectstore": "bluestore",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osd_id": "0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.type": "block",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.vdo": "0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.with_tpm": "0"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            },
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "type": "block",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "vg_name": "ceph_vg0"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:        }
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:    ],
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:    "1": [
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:        {
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "devices": [
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "/dev/loop4"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            ],
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_name": "ceph_lv1",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_size": "21470642176",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "name": "ceph_lv1",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "tags": {
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cluster_name": "ceph",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.crush_device_class": "",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.encrypted": "0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.objectstore": "bluestore",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osd_id": "1",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.type": "block",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.vdo": "0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.with_tpm": "0"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            },
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "type": "block",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "vg_name": "ceph_vg1"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:        }
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:    ],
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:    "2": [
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:        {
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "devices": [
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "/dev/loop5"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            ],
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_name": "ceph_lv2",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_size": "21470642176",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "name": "ceph_lv2",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "tags": {
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.cluster_name": "ceph",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.crush_device_class": "",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.encrypted": "0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.objectstore": "bluestore",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osd_id": "2",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.type": "block",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.vdo": "0",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:                "ceph.with_tpm": "0"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            },
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "type": "block",
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:            "vg_name": "ceph_vg2"
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:        }
Feb  1 09:55:12 np0005604375 recursing_easley[109477]:    ]
Feb  1 09:55:12 np0005604375 recursing_easley[109477]: }
Feb  1 09:55:12 np0005604375 podman[109460]: 2026-02-01 14:55:12.764664141 +0000 UTC m=+0.409873700 container died 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 09:55:12 np0005604375 systemd[1]: libpod-2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528.scope: Deactivated successfully.
Feb  1 09:55:12 np0005604375 systemd[1]: var-lib-containers-storage-overlay-84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2-merged.mount: Deactivated successfully.
Feb  1 09:55:12 np0005604375 podman[109460]: 2026-02-01 14:55:12.809117964 +0000 UTC m=+0.454327533 container remove 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Feb  1 09:55:12 np0005604375 systemd[1]: libpod-conmon-2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528.scope: Deactivated successfully.
Feb  1 09:55:13 np0005604375 podman[109664]: 2026-02-01 14:55:13.158202153 +0000 UTC m=+0.037672968 container create b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  1 09:55:13 np0005604375 systemd[1]: Started libpod-conmon-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope.
Feb  1 09:55:13 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:55:13 np0005604375 podman[109664]: 2026-02-01 14:55:13.20928136 +0000 UTC m=+0.088752235 container init b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:55:13 np0005604375 podman[109664]: 2026-02-01 14:55:13.214125371 +0000 UTC m=+0.093596186 container start b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:55:13 np0005604375 busy_yalow[109730]: 167 167
Feb  1 09:55:13 np0005604375 systemd[1]: libpod-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope: Deactivated successfully.
Feb  1 09:55:13 np0005604375 podman[109664]: 2026-02-01 14:55:13.217164131 +0000 UTC m=+0.096634966 container attach b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 09:55:13 np0005604375 conmon[109730]: conmon b4323aad3b6d1e1bb5aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope/container/memory.events
Feb  1 09:55:13 np0005604375 podman[109664]: 2026-02-01 14:55:13.219756891 +0000 UTC m=+0.099227716 container died b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:55:13 np0005604375 podman[109664]: 2026-02-01 14:55:13.137462946 +0000 UTC m=+0.016933781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:55:13 np0005604375 systemd[1]: var-lib-containers-storage-overlay-1254f6b635cada57abec067d12324479992d2c587720476d59341debac36f863-merged.mount: Deactivated successfully.
Feb  1 09:55:13 np0005604375 podman[109664]: 2026-02-01 14:55:13.253030127 +0000 UTC m=+0.132500952 container remove b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 09:55:13 np0005604375 systemd[1]: libpod-conmon-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope: Deactivated successfully.
Feb  1 09:55:13 np0005604375 podman[109756]: 2026-02-01 14:55:13.38779218 +0000 UTC m=+0.049454850 container create 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 09:55:13 np0005604375 systemd[1]: Started libpod-conmon-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope.
Feb  1 09:55:13 np0005604375 podman[109756]: 2026-02-01 14:55:13.368317352 +0000 UTC m=+0.029979992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:55:13 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:55:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:55:13 np0005604375 python3.9[109732]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:55:13 np0005604375 podman[109756]: 2026-02-01 14:55:13.501733754 +0000 UTC m=+0.163396434 container init 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 09:55:13 np0005604375 podman[109756]: 2026-02-01 14:55:13.50848968 +0000 UTC m=+0.170152310 container start 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 09:55:13 np0005604375 podman[109756]: 2026-02-01 14:55:13.512381989 +0000 UTC m=+0.174044619 container attach 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:55:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:14 np0005604375 lvm[109919]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:55:14 np0005604375 lvm[109919]: VG ceph_vg1 finished
Feb  1 09:55:14 np0005604375 lvm[109918]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:55:14 np0005604375 lvm[109918]: VG ceph_vg0 finished
Feb  1 09:55:14 np0005604375 lvm[109921]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:55:14 np0005604375 lvm[109921]: VG ceph_vg2 finished
Feb  1 09:55:14 np0005604375 nostalgic_mclean[109773]: {}
Feb  1 09:55:14 np0005604375 systemd[1]: libpod-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope: Deactivated successfully.
Feb  1 09:55:14 np0005604375 systemd[1]: libpod-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope: Consumed 1.081s CPU time.
Feb  1 09:55:14 np0005604375 podman[109756]: 2026-02-01 14:55:14.26806425 +0000 UTC m=+0.929726880 container died 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:55:14 np0005604375 systemd[1]: var-lib-containers-storage-overlay-23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b-merged.mount: Deactivated successfully.
Feb  1 09:55:14 np0005604375 podman[109756]: 2026-02-01 14:55:14.319942125 +0000 UTC m=+0.981604745 container remove 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:55:14 np0005604375 systemd[1]: libpod-conmon-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope: Deactivated successfully.
Feb  1 09:55:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:55:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:55:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:55:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:55:14 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb  1 09:55:14 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb  1 09:55:14 np0005604375 python3.9[110087]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:55:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:55:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:55:15 np0005604375 systemd[1]: session-36.scope: Deactivated successfully.
Feb  1 09:55:15 np0005604375 systemd[1]: session-36.scope: Consumed 1.956s CPU time.
Feb  1 09:55:15 np0005604375 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Feb  1 09:55:15 np0005604375 systemd-logind[786]: Removed session 36.
Feb  1 09:55:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:16 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb  1 09:55:16 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb  1 09:55:17 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb  1 09:55:17 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb  1 09:55:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:55:17
Feb  1 09:55:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:55:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:55:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'vms', 'default.rgw.control']
Feb  1 09:55:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 09:55:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:55:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:55:19 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb  1 09:55:19 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb  1 09:55:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:19 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb  1 09:55:19 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb  1 09:55:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb  1 09:55:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb  1 09:55:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:20 np0005604375 systemd-logind[786]: New session 37 of user zuul.
Feb  1 09:55:20 np0005604375 systemd[1]: Started Session 37 of User zuul.
Feb  1 09:55:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:21 np0005604375 python3.9[110267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:55:22 np0005604375 python3.9[110421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:55:23 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb  1 09:55:23 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb  1 09:55:23 np0005604375 python3.9[110577]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:55:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:24 np0005604375 python3.9[110661]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:55:24 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb  1 09:55:24 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb  1 09:55:25 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb  1 09:55:25 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb  1 09:55:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:26 np0005604375 python3.9[110814]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:55:27 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb  1 09:55:27 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb  1 09:55:27 np0005604375 python3.9[111009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:55:27 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 09:55:28 np0005604375 python3.9[111161]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:55:28 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb  1 09:55:28 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb  1 09:55:28 np0005604375 python3.9[111326]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:55:28 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb  1 09:55:28 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb  1 09:55:29 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb  1 09:55:29 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb  1 09:55:29 np0005604375 python3.9[111404]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:55:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:29 np0005604375 python3.9[111556]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:55:30 np0005604375 python3.9[111634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:55:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:31 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb  1 09:55:31 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb  1 09:55:31 np0005604375 python3.9[111786]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:55:31 np0005604375 python3.9[111938]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:55:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:32 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb  1 09:55:32 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb  1 09:55:32 np0005604375 python3.9[112090]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:55:32 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb  1 09:55:32 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb  1 09:55:32 np0005604375 python3.9[112242]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:55:32 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb  1 09:55:32 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb  1 09:55:33 np0005604375 python3.9[112394]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:55:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:34 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb  1 09:55:34 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb  1 09:55:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb  1 09:55:35 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb  1 09:55:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:35 np0005604375 python3.9[112547]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:55:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:36 np0005604375 python3.9[112701]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:55:36 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb  1 09:55:36 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb  1 09:55:36 np0005604375 python3.9[112853]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:55:37 np0005604375 python3.9[113005]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:55:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:38 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb  1 09:55:38 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb  1 09:55:38 np0005604375 python3.9[113158]: ansible-service_facts Invoked
Feb  1 09:55:38 np0005604375 network[113175]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 09:55:38 np0005604375 network[113176]: 'network-scripts' will be removed from distribution in near future.
Feb  1 09:55:38 np0005604375 network[113177]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 09:55:39 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb  1 09:55:39 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb  1 09:55:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:40 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb  1 09:55:40 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb  1 09:55:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:41 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb  1 09:55:41 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb  1 09:55:41 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb  1 09:55:41 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb  1 09:55:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:42 np0005604375 python3.9[113629]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:55:42 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb  1 09:55:42 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb  1 09:55:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:44 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb  1 09:55:44 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb  1 09:55:44 np0005604375 python3.9[113782]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb  1 09:55:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb  1 09:55:45 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb  1 09:55:45 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb  1 09:55:45 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb  1 09:55:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:45 np0005604375 python3.9[113934]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:55:46 np0005604375 python3.9[114012]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:55:47 np0005604375 python3.9[114164]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:55:47 np0005604375 python3.9[114242]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:55:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:48 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb  1 09:55:48 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb  1 09:55:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:55:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:55:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:55:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:55:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:55:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:55:49 np0005604375 python3.9[114395]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:55:49 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb  1 09:55:49 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb  1 09:55:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:50 np0005604375 python3.9[114547]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:55:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb  1 09:55:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb  1 09:55:51 np0005604375 python3.9[114631]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:55:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:51 np0005604375 systemd[76558]: Created slice User Background Tasks Slice.
Feb  1 09:55:51 np0005604375 systemd[76558]: Starting Cleanup of User's Temporary Files and Directories...
Feb  1 09:55:51 np0005604375 systemd[76558]: Finished Cleanup of User's Temporary Files and Directories.
Feb  1 09:55:52 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb  1 09:55:52 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb  1 09:55:52 np0005604375 systemd[1]: session-37.scope: Deactivated successfully.
Feb  1 09:55:52 np0005604375 systemd[1]: session-37.scope: Consumed 21.201s CPU time.
Feb  1 09:55:52 np0005604375 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Feb  1 09:55:52 np0005604375 systemd-logind[786]: Removed session 37.
Feb  1 09:55:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:54 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb  1 09:55:54 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb  1 09:55:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:55:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:56 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb  1 09:55:56 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb  1 09:55:57 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb  1 09:55:57 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb  1 09:55:57 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb  1 09:55:57 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb  1 09:55:57 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb  1 09:55:57 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb  1 09:55:57 np0005604375 systemd-logind[786]: New session 38 of user zuul.
Feb  1 09:55:57 np0005604375 systemd[1]: Started Session 38 of User zuul.
Feb  1 09:55:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:58 np0005604375 python3.9[114814]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:55:58 np0005604375 python3.9[114966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:55:59 np0005604375 python3.9[115044]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:55:59 np0005604375 systemd[1]: session-38.scope: Deactivated successfully.
Feb  1 09:55:59 np0005604375 systemd[1]: session-38.scope: Consumed 1.415s CPU time.
Feb  1 09:55:59 np0005604375 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Feb  1 09:55:59 np0005604375 systemd-logind[786]: Removed session 38.
Feb  1 09:55:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:55:59 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb  1 09:56:00 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb  1 09:56:00 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb  1 09:56:00 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb  1 09:56:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:56:01 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb  1 09:56:01 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb  1 09:56:01 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb  1 09:56:01 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb  1 09:56:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:01 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb  1 09:56:02 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb  1 09:56:02 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb  1 09:56:02 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb  1 09:56:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:03 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb  1 09:56:03 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb  1 09:56:05 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb  1 09:56:05 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb  1 09:56:05 np0005604375 systemd-logind[786]: New session 39 of user zuul.
Feb  1 09:56:05 np0005604375 systemd[1]: Started Session 39 of User zuul.
Feb  1 09:56:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:56:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:06 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb  1 09:56:06 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb  1 09:56:06 np0005604375 python3.9[115222]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:56:07 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb  1 09:56:07 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb  1 09:56:07 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb  1 09:56:07 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb  1 09:56:07 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb  1 09:56:07 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb  1 09:56:07 np0005604375 python3.9[115378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:08 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb  1 09:56:08 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb  1 09:56:08 np0005604375 python3.9[115553]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:08 np0005604375 python3.9[115631]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.hwlpz2yw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:09 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb  1 09:56:09 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb  1 09:56:09 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb  1 09:56:09 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb  1 09:56:09 np0005604375 python3.9[115783]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:10 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb  1 09:56:10 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb  1 09:56:10 np0005604375 python3.9[115861]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.qgg52tl8 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:10 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb  1 09:56:10 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb  1 09:56:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:56:10 np0005604375 python3.9[116013]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:56:11 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb  1 09:56:11 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb  1 09:56:11 np0005604375 python3.9[116165]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:11 np0005604375 python3.9[116243]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:56:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:12 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb  1 09:56:12 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb  1 09:56:12 np0005604375 python3.9[116395]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:12 np0005604375 python3.9[116473]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:56:13 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb  1 09:56:13 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb  1 09:56:13 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb  1 09:56:13 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb  1 09:56:13 np0005604375 python3.9[116625]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:13 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb  1 09:56:13 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb  1 09:56:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:13 np0005604375 python3.9[116777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:14 np0005604375 python3.9[116855]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:56:15 np0005604375 python3.9[117076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:15 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb  1 09:56:15 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb  1 09:56:15 np0005604375 podman[117229]: 2026-02-01 14:56:15.391163494 +0000 UTC m=+0.041937454 container create cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:56:15 np0005604375 systemd[1]: Started libpod-conmon-cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a.scope.
Feb  1 09:56:15 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:56:15 np0005604375 podman[117229]: 2026-02-01 14:56:15.442342382 +0000 UTC m=+0.093116352 container init cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  1 09:56:15 np0005604375 podman[117229]: 2026-02-01 14:56:15.447733982 +0000 UTC m=+0.098507972 container start cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:56:15 np0005604375 nice_ardinghelli[117247]: 167 167
Feb  1 09:56:15 np0005604375 systemd[1]: libpod-cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a.scope: Deactivated successfully.
Feb  1 09:56:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:56:15 np0005604375 podman[117229]: 2026-02-01 14:56:15.451124812 +0000 UTC m=+0.101898792 container attach cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  1 09:56:15 np0005604375 podman[117229]: 2026-02-01 14:56:15.452014209 +0000 UTC m=+0.102788199 container died cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 09:56:15 np0005604375 systemd[1]: var-lib-containers-storage-overlay-775e359df296d55538c6904c6f8c574b881d2cb432dda41b5ff60a8b40982f6f-merged.mount: Deactivated successfully.
Feb  1 09:56:15 np0005604375 podman[117229]: 2026-02-01 14:56:15.37754022 +0000 UTC m=+0.028314190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:56:15 np0005604375 podman[117229]: 2026-02-01 14:56:15.491912692 +0000 UTC m=+0.142686652 container remove cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 09:56:15 np0005604375 systemd[1]: libpod-conmon-cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a.scope: Deactivated successfully.
Feb  1 09:56:15 np0005604375 python3.9[117218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:15 np0005604375 podman[117276]: 2026-02-01 14:56:15.601262695 +0000 UTC m=+0.038844963 container create 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb  1 09:56:15 np0005604375 systemd[1]: Started libpod-conmon-7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f.scope.
Feb  1 09:56:15 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:56:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:15 np0005604375 podman[117276]: 2026-02-01 14:56:15.58659688 +0000 UTC m=+0.024179168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:56:15 np0005604375 podman[117276]: 2026-02-01 14:56:15.701449036 +0000 UTC m=+0.139031344 container init 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 09:56:15 np0005604375 podman[117276]: 2026-02-01 14:56:15.707104443 +0000 UTC m=+0.144686711 container start 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:56:15 np0005604375 podman[117276]: 2026-02-01 14:56:15.714042379 +0000 UTC m=+0.151624647 container attach 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 09:56:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:16 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:56:16 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:56:16 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:56:16 np0005604375 blissful_chaum[117313]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:56:16 np0005604375 blissful_chaum[117313]: --> All data devices are unavailable
Feb  1 09:56:16 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb  1 09:56:16 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb  1 09:56:16 np0005604375 systemd[1]: libpod-7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f.scope: Deactivated successfully.
Feb  1 09:56:16 np0005604375 podman[117276]: 2026-02-01 14:56:16.126976894 +0000 UTC m=+0.564559192 container died 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 09:56:16 np0005604375 systemd[1]: var-lib-containers-storage-overlay-6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323-merged.mount: Deactivated successfully.
Feb  1 09:56:16 np0005604375 podman[117276]: 2026-02-01 14:56:16.167230308 +0000 UTC m=+0.604812576 container remove 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:56:16 np0005604375 systemd[1]: libpod-conmon-7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f.scope: Deactivated successfully.
Feb  1 09:56:16 np0005604375 python3.9[117474]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:56:16 np0005604375 systemd[1]: Reloading.
Feb  1 09:56:16 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:56:16 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:56:16 np0005604375 podman[117572]: 2026-02-01 14:56:16.599645191 +0000 UTC m=+0.040177922 container create 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:56:16 np0005604375 podman[117572]: 2026-02-01 14:56:16.576182445 +0000 UTC m=+0.016715146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:56:16 np0005604375 systemd[1]: Started libpod-conmon-1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7.scope.
Feb  1 09:56:16 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:56:16 np0005604375 podman[117572]: 2026-02-01 14:56:16.738851069 +0000 UTC m=+0.179383840 container init 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 09:56:16 np0005604375 podman[117572]: 2026-02-01 14:56:16.746069573 +0000 UTC m=+0.186602254 container start 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 09:56:16 np0005604375 podman[117572]: 2026-02-01 14:56:16.749406062 +0000 UTC m=+0.189938833 container attach 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 09:56:16 np0005604375 great_cray[117587]: 167 167
Feb  1 09:56:16 np0005604375 podman[117572]: 2026-02-01 14:56:16.752232776 +0000 UTC m=+0.192765497 container died 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:56:16 np0005604375 systemd[1]: libpod-1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7.scope: Deactivated successfully.
Feb  1 09:56:16 np0005604375 systemd[1]: var-lib-containers-storage-overlay-851627d0867326904f3e1771a30bfe83e47a32c577158af70a41ce7bef5b37ac-merged.mount: Deactivated successfully.
Feb  1 09:56:16 np0005604375 podman[117572]: 2026-02-01 14:56:16.794496229 +0000 UTC m=+0.235028910 container remove 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 09:56:16 np0005604375 systemd[1]: libpod-conmon-1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7.scope: Deactivated successfully.
Feb  1 09:56:16 np0005604375 podman[117637]: 2026-02-01 14:56:16.981028321 +0000 UTC m=+0.054270291 container create dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 09:56:17 np0005604375 systemd[1]: Started libpod-conmon-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope.
Feb  1 09:56:17 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:56:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:17 np0005604375 podman[117637]: 2026-02-01 14:56:17.048372118 +0000 UTC m=+0.121614128 container init dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:56:17 np0005604375 podman[117637]: 2026-02-01 14:56:17.055128008 +0000 UTC m=+0.128369978 container start dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:56:17 np0005604375 podman[117637]: 2026-02-01 14:56:16.965236092 +0000 UTC m=+0.038478092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:56:17 np0005604375 podman[117637]: 2026-02-01 14:56:17.06227241 +0000 UTC m=+0.135514430 container attach dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 09:56:17 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb  1 09:56:17 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb  1 09:56:17 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb  1 09:56:17 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb  1 09:56:17 np0005604375 great_shaw[117705]: {
Feb  1 09:56:17 np0005604375 great_shaw[117705]:    "0": [
Feb  1 09:56:17 np0005604375 great_shaw[117705]:        {
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "devices": [
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "/dev/loop3"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            ],
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_name": "ceph_lv0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_size": "21470642176",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "name": "ceph_lv0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "tags": {
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cluster_name": "ceph",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.crush_device_class": "",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.encrypted": "0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.objectstore": "bluestore",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osd_id": "0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.type": "block",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.vdo": "0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.with_tpm": "0"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            },
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "type": "block",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "vg_name": "ceph_vg0"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:        }
Feb  1 09:56:17 np0005604375 great_shaw[117705]:    ],
Feb  1 09:56:17 np0005604375 great_shaw[117705]:    "1": [
Feb  1 09:56:17 np0005604375 great_shaw[117705]:        {
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "devices": [
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "/dev/loop4"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            ],
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_name": "ceph_lv1",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_size": "21470642176",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "name": "ceph_lv1",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "tags": {
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cluster_name": "ceph",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.crush_device_class": "",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.encrypted": "0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.objectstore": "bluestore",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osd_id": "1",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.type": "block",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.vdo": "0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.with_tpm": "0"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            },
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "type": "block",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "vg_name": "ceph_vg1"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:        }
Feb  1 09:56:17 np0005604375 great_shaw[117705]:    ],
Feb  1 09:56:17 np0005604375 great_shaw[117705]:    "2": [
Feb  1 09:56:17 np0005604375 great_shaw[117705]:        {
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "devices": [
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "/dev/loop5"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            ],
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_name": "ceph_lv2",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_size": "21470642176",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "name": "ceph_lv2",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "tags": {
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.cluster_name": "ceph",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.crush_device_class": "",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.encrypted": "0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.objectstore": "bluestore",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osd_id": "2",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.type": "block",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.vdo": "0",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:                "ceph.with_tpm": "0"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            },
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "type": "block",
Feb  1 09:56:17 np0005604375 great_shaw[117705]:            "vg_name": "ceph_vg2"
Feb  1 09:56:17 np0005604375 great_shaw[117705]:        }
Feb  1 09:56:17 np0005604375 great_shaw[117705]:    ]
Feb  1 09:56:17 np0005604375 great_shaw[117705]: }
Feb  1 09:56:17 np0005604375 systemd[1]: libpod-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope: Deactivated successfully.
Feb  1 09:56:17 np0005604375 conmon[117705]: conmon dec6256718a714252705 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope/container/memory.events
Feb  1 09:56:17 np0005604375 podman[117637]: 2026-02-01 14:56:17.33609348 +0000 UTC m=+0.409335470 container died dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb  1 09:56:17 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b-merged.mount: Deactivated successfully.
Feb  1 09:56:17 np0005604375 podman[117637]: 2026-02-01 14:56:17.378806617 +0000 UTC m=+0.452048607 container remove dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 09:56:17 np0005604375 python3.9[117785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:17 np0005604375 systemd[1]: libpod-conmon-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope: Deactivated successfully.
Feb  1 09:56:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:56:17
Feb  1 09:56:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:56:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:56:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', 'volumes']
Feb  1 09:56:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 09:56:17 np0005604375 podman[117940]: 2026-02-01 14:56:17.77651002 +0000 UTC m=+0.053251600 container create d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 09:56:17 np0005604375 python3.9[117929]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:17 np0005604375 systemd[1]: Started libpod-conmon-d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600.scope.
Feb  1 09:56:17 np0005604375 podman[117940]: 2026-02-01 14:56:17.754315562 +0000 UTC m=+0.031057122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:56:17 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:56:17 np0005604375 podman[117940]: 2026-02-01 14:56:17.86787766 +0000 UTC m=+0.144619230 container init d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:56:17 np0005604375 podman[117940]: 2026-02-01 14:56:17.874413774 +0000 UTC m=+0.151155334 container start d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:56:17 np0005604375 podman[117940]: 2026-02-01 14:56:17.877822345 +0000 UTC m=+0.154563895 container attach d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 09:56:17 np0005604375 eloquent_hellman[117958]: 167 167
Feb  1 09:56:17 np0005604375 systemd[1]: libpod-d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600.scope: Deactivated successfully.
Feb  1 09:56:17 np0005604375 podman[117940]: 2026-02-01 14:56:17.8938539 +0000 UTC m=+0.170595450 container died d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:56:17 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d86fa3950f33da487b70c506788c87cc878b433b48497aedccb7fda6bdcd3cf0-merged.mount: Deactivated successfully.
Feb  1 09:56:17 np0005604375 podman[117940]: 2026-02-01 14:56:17.923011885 +0000 UTC m=+0.199753435 container remove d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 09:56:17 np0005604375 systemd[1]: libpod-conmon-d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600.scope: Deactivated successfully.
Feb  1 09:56:18 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb  1 09:56:18 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb  1 09:56:18 np0005604375 podman[118053]: 2026-02-01 14:56:18.056815712 +0000 UTC m=+0.054403485 container create 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:56:18 np0005604375 systemd[1]: Started libpod-conmon-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope.
Feb  1 09:56:18 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:56:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:56:18 np0005604375 podman[118053]: 2026-02-01 14:56:18.038354144 +0000 UTC m=+0.035941977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:56:18 np0005604375 podman[118053]: 2026-02-01 14:56:18.138579606 +0000 UTC m=+0.136167449 container init 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 09:56:18 np0005604375 podman[118053]: 2026-02-01 14:56:18.145363917 +0000 UTC m=+0.142951700 container start 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  1 09:56:18 np0005604375 podman[118053]: 2026-02-01 14:56:18.148692256 +0000 UTC m=+0.146280109 container attach 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:56:18 np0005604375 python3.9[118154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:56:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:56:18 np0005604375 python3.9[118280]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:18 np0005604375 lvm[118307]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:56:18 np0005604375 lvm[118309]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:56:18 np0005604375 lvm[118309]: VG ceph_vg1 finished
Feb  1 09:56:18 np0005604375 lvm[118307]: VG ceph_vg0 finished
Feb  1 09:56:18 np0005604375 lvm[118317]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:56:18 np0005604375 lvm[118317]: VG ceph_vg2 finished
Feb  1 09:56:18 np0005604375 mystifying_feynman[118103]: {}
Feb  1 09:56:18 np0005604375 systemd[1]: libpod-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope: Deactivated successfully.
Feb  1 09:56:18 np0005604375 podman[118053]: 2026-02-01 14:56:18.929349576 +0000 UTC m=+0.926937349 container died 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:56:18 np0005604375 systemd[1]: libpod-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope: Consumed 1.135s CPU time.
Feb  1 09:56:18 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461-merged.mount: Deactivated successfully.
Feb  1 09:56:18 np0005604375 podman[118053]: 2026-02-01 14:56:18.969417654 +0000 UTC m=+0.967005437 container remove 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 09:56:18 np0005604375 systemd[1]: libpod-conmon-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope: Deactivated successfully.
Feb  1 09:56:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:56:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:56:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:56:19 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb  1 09:56:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:56:19 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb  1 09:56:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:56:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:56:19 np0005604375 python3.9[118501]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:56:19 np0005604375 systemd[1]: Reloading.
Feb  1 09:56:19 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:56:19 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:56:19 np0005604375 systemd[1]: Starting Create netns directory...
Feb  1 09:56:19 np0005604375 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  1 09:56:19 np0005604375 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  1 09:56:19 np0005604375 systemd[1]: Finished Create netns directory.
Feb  1 09:56:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb  1 09:56:20 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb  1 09:56:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:56:20 np0005604375 python3.9[118691]: ansible-ansible.builtin.service_facts Invoked
Feb  1 09:56:20 np0005604375 network[118708]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 09:56:20 np0005604375 network[118709]: 'network-scripts' will be removed from distribution in near future.
Feb  1 09:56:20 np0005604375 network[118710]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 09:56:21 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb  1 09:56:21 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb  1 09:56:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:22 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb  1 09:56:22 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb  1 09:56:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:24 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb  1 09:56:24 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb  1 09:56:24 np0005604375 python3.9[118973]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:25 np0005604375 python3.9[119051]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:56:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:25 np0005604375 python3.9[119203]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:26 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb  1 09:56:26 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb  1 09:56:26 np0005604375 python3.9[119355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:56:26 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb  1 09:56:27 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb  1 09:56:27 np0005604375 python3.9[119433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:27 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb  1 09:56:27 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb  1 09:56:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:56:28 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:56:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 09:56:28 np0005604375 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb  1 09:56:28 np0005604375 python3.9[119585]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  1 09:56:28 np0005604375 systemd[1]: Starting Time & Date Service...
Feb  1 09:56:28 np0005604375 systemd[1]: Started Time & Date Service.
Feb  1 09:56:28 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb  1 09:56:28 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb  1 09:56:28 np0005604375 python3.9[119741]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.163490) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789163644, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7156, "num_deletes": 251, "total_data_size": 9709209, "memory_usage": 9892608, "flush_reason": "Manual Compaction"}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789206535, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7680689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7299, "table_properties": {"data_size": 7654229, "index_size": 17321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74026, "raw_average_key_size": 23, "raw_value_size": 7592411, "raw_average_value_size": 2371, "num_data_blocks": 762, "num_entries": 3202, "num_filter_entries": 3202, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957400, "oldest_key_time": 1769957400, "file_creation_time": 1769957789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 43113 microseconds, and 19074 cpu microseconds.
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.206608) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7680689 bytes OK
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.206655) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.208377) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.208404) EVENT_LOG_v1 {"time_micros": 1769957789208397, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.208454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9678222, prev total WAL file size 9678222, number of live WAL files 2.
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.211413) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7500KB) 13(58KB) 8(1944B)]
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789211583, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7742593, "oldest_snapshot_seqno": -1}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3028 keys, 7695593 bytes, temperature: kUnknown
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789252524, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7695593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7669481, "index_size": 17426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72466, "raw_average_key_size": 23, "raw_value_size": 7608931, "raw_average_value_size": 2512, "num_data_blocks": 768, "num_entries": 3028, "num_filter_entries": 3028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769957789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.252806) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7695593 bytes
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.254409) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.7 rd, 187.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3317, records dropped: 289 output_compression: NoCompression
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.254442) EVENT_LOG_v1 {"time_micros": 1769957789254427, "job": 4, "event": "compaction_finished", "compaction_time_micros": 41037, "compaction_time_cpu_micros": 22285, "output_level": 6, "num_output_files": 1, "total_output_size": 7695593, "num_input_records": 3317, "num_output_records": 3028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789255779, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789255870, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789255916, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb  1 09:56:29 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.211250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 09:56:29 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb  1 09:57:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:44 np0005604375 python3.9[130682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:57:44 np0005604375 rsyslogd[1001]: imjournal: 970 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb  1 09:57:44 np0005604375 python3.9[130805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957863.6507335-246-56901500974547/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:57:45 np0005604375 python3.9[130957]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:57:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:57:45 np0005604375 python3.9[131109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:57:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:46 np0005604375 python3.9[131232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957865.4107182-270-97589979076498/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:57:46 np0005604375 python3.9[131384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:57:47 np0005604375 python3.9[131536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:57:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:48 np0005604375 python3.9[131659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957867.1387234-294-234283495892471/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:57:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:57:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:57:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:57:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:57:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:57:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:57:49 np0005604375 python3.9[131811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:57:49 np0005604375 python3.9[131963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:57:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:50 np0005604375 python3.9[132086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957869.2526317-318-93359104206061/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:57:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:57:50 np0005604375 python3.9[132238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:57:51 np0005604375 python3.9[132390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:57:51 np0005604375 python3.9[132513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957870.8745797-342-183848414117328/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:57:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:52 np0005604375 python3.9[132665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:57:52 np0005604375 python3.9[132817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:57:53 np0005604375 python3.9[132940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957872.5339515-366-1286432164278/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:57:53 np0005604375 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Feb  1 09:57:53 np0005604375 systemd[1]: session-43.scope: Deactivated successfully.
Feb  1 09:57:53 np0005604375 systemd[1]: session-43.scope: Consumed 19.458s CPU time.
Feb  1 09:57:53 np0005604375 systemd-logind[786]: Removed session 43.
Feb  1 09:57:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:57:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:57:59 np0005604375 systemd-logind[786]: New session 44 of user zuul.
Feb  1 09:57:59 np0005604375 systemd[1]: Started Session 44 of User zuul.
Feb  1 09:57:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:00 np0005604375 python3.9[133120]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:00 np0005604375 python3.9[133272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:01 np0005604375 python3.9[133395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957880.33819-29-54586483380579/.source.conf _original_basename=ceph.conf follow=False checksum=15e400aca5823242b048f6d77e32d66f71f9194c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:02 np0005604375 python3.9[133547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:02 np0005604375 python3.9[133670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957881.7052343-29-128662861448092/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=9e80b5c3ad70771b2808c3ea209191214d8953f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:02 np0005604375 systemd[1]: session-44.scope: Deactivated successfully.
Feb  1 09:58:02 np0005604375 systemd[1]: session-44.scope: Consumed 2.100s CPU time.
Feb  1 09:58:02 np0005604375 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Feb  1 09:58:02 np0005604375 systemd-logind[786]: Removed session 44.
Feb  1 09:58:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:08 np0005604375 systemd-logind[786]: New session 45 of user zuul.
Feb  1 09:58:08 np0005604375 systemd[1]: Started Session 45 of User zuul.
Feb  1 09:58:09 np0005604375 python3.9[133848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:58:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:10 np0005604375 python3.9[134004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:11 np0005604375 python3.9[134156]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:11 np0005604375 python3.9[134306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:58:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:12 np0005604375 python3.9[134458]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  1 09:58:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:14 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb  1 09:58:14 np0005604375 python3.9[134614]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:58:15 np0005604375 python3.9[134698]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:58:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:17 np0005604375 python3.9[134851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 09:58:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:58:17
Feb  1 09:58:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:58:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:58:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb  1 09:58:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 09:58:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:58:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:58:19 np0005604375 python3[135006]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb  1 09:58:19 np0005604375 python3.9[135158]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:20 np0005604375 python3.9[135310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:20 np0005604375 python3.9[135388]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:21 np0005604375 python3.9[135540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:21 np0005604375 python3.9[135618]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9gia48h1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:22 np0005604375 python3.9[135770]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:22 np0005604375 python3.9[135848]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:23 np0005604375 python3.9[136000]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:58:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:24 np0005604375 python3[136203]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:58:24 np0005604375 podman[136451]: 2026-02-01 14:58:24.732170668 +0000 UTC m=+0.041165148 container create ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:58:24 np0005604375 systemd[1]: Started libpod-conmon-ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895.scope.
Feb  1 09:58:24 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:58:24 np0005604375 podman[136451]: 2026-02-01 14:58:24.794324194 +0000 UTC m=+0.103318674 container init ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 09:58:24 np0005604375 podman[136451]: 2026-02-01 14:58:24.800142927 +0000 UTC m=+0.109137457 container start ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  1 09:58:24 np0005604375 podman[136451]: 2026-02-01 14:58:24.80415021 +0000 UTC m=+0.113144730 container attach ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:58:24 np0005604375 determined_turing[136468]: 167 167
Feb  1 09:58:24 np0005604375 systemd[1]: libpod-ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895.scope: Deactivated successfully.
Feb  1 09:58:24 np0005604375 podman[136451]: 2026-02-01 14:58:24.805459647 +0000 UTC m=+0.114454137 container died ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:58:24 np0005604375 podman[136451]: 2026-02-01 14:58:24.718348549 +0000 UTC m=+0.027343049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:58:24 np0005604375 python3.9[136437]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:24 np0005604375 systemd[1]: var-lib-containers-storage-overlay-121a7840aa8c89d66f92a18da3686e91979e3f31be00cd0723388d600741e889-merged.mount: Deactivated successfully.
Feb  1 09:58:24 np0005604375 podman[136451]: 2026-02-01 14:58:24.853416014 +0000 UTC m=+0.162410504 container remove ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:58:24 np0005604375 systemd[1]: libpod-conmon-ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895.scope: Deactivated successfully.
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:58:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:58:24 np0005604375 podman[136520]: 2026-02-01 14:58:24.989646571 +0000 UTC m=+0.044704657 container create 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 09:58:25 np0005604375 systemd[1]: Started libpod-conmon-63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8.scope.
Feb  1 09:58:25 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:58:25 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:25 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:25 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:25 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:25 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:25 np0005604375 podman[136520]: 2026-02-01 14:58:24.972093728 +0000 UTC m=+0.027151834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:58:25 np0005604375 podman[136520]: 2026-02-01 14:58:25.076771359 +0000 UTC m=+0.131829505 container init 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:58:25 np0005604375 podman[136520]: 2026-02-01 14:58:25.081253505 +0000 UTC m=+0.136311601 container start 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 09:58:25 np0005604375 podman[136520]: 2026-02-01 14:58:25.084508786 +0000 UTC m=+0.139566892 container attach 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb  1 09:58:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:25 np0005604375 python3.9[136640]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957904.369393-152-248137144223922/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:25 np0005604375 sleepy_ardinghelli[136558]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:58:25 np0005604375 sleepy_ardinghelli[136558]: --> All data devices are unavailable
Feb  1 09:58:25 np0005604375 systemd[1]: libpod-63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8.scope: Deactivated successfully.
Feb  1 09:58:25 np0005604375 podman[136520]: 2026-02-01 14:58:25.620212677 +0000 UTC m=+0.675270803 container died 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 09:58:25 np0005604375 systemd[1]: var-lib-containers-storage-overlay-97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6-merged.mount: Deactivated successfully.
Feb  1 09:58:25 np0005604375 podman[136520]: 2026-02-01 14:58:25.670469468 +0000 UTC m=+0.725527584 container remove 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 09:58:25 np0005604375 systemd[1]: libpod-conmon-63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8.scope: Deactivated successfully.
Feb  1 09:58:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:26 np0005604375 podman[136881]: 2026-02-01 14:58:26.102128006 +0000 UTC m=+0.056421897 container create 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  1 09:58:26 np0005604375 systemd[1]: Started libpod-conmon-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope.
Feb  1 09:58:26 np0005604375 podman[136881]: 2026-02-01 14:58:26.070921109 +0000 UTC m=+0.025215050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:58:26 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:58:26 np0005604375 podman[136881]: 2026-02-01 14:58:26.195032356 +0000 UTC m=+0.149326257 container init 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:58:26 np0005604375 podman[136881]: 2026-02-01 14:58:26.203082192 +0000 UTC m=+0.157376083 container start 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:58:26 np0005604375 podman[136881]: 2026-02-01 14:58:26.207028753 +0000 UTC m=+0.161322624 container attach 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 09:58:26 np0005604375 nervous_haslett[136898]: 167 167
Feb  1 09:58:26 np0005604375 systemd[1]: libpod-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope: Deactivated successfully.
Feb  1 09:58:26 np0005604375 conmon[136898]: conmon 784d74a350923ff52e92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope/container/memory.events
Feb  1 09:58:26 np0005604375 podman[136881]: 2026-02-01 14:58:26.21228105 +0000 UTC m=+0.166574911 container died 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb  1 09:58:26 np0005604375 systemd[1]: var-lib-containers-storage-overlay-268be11ff5e90f2ac3592089f7b8fc61340a2ed2fb34c10bc548079565c18ed1-merged.mount: Deactivated successfully.
Feb  1 09:58:26 np0005604375 podman[136881]: 2026-02-01 14:58:26.24857989 +0000 UTC m=+0.202873751 container remove 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb  1 09:58:26 np0005604375 python3.9[136879]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:26 np0005604375 systemd[1]: libpod-conmon-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope: Deactivated successfully.
Feb  1 09:58:26 np0005604375 podman[136924]: 2026-02-01 14:58:26.374194319 +0000 UTC m=+0.048435921 container create b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:58:26 np0005604375 systemd[1]: Started libpod-conmon-b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2.scope.
Feb  1 09:58:26 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:58:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:26 np0005604375 podman[136924]: 2026-02-01 14:58:26.353443066 +0000 UTC m=+0.027684678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:58:26 np0005604375 podman[136924]: 2026-02-01 14:58:26.463148138 +0000 UTC m=+0.137389730 container init b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:58:26 np0005604375 podman[136924]: 2026-02-01 14:58:26.471812392 +0000 UTC m=+0.146053964 container start b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:58:26 np0005604375 podman[136924]: 2026-02-01 14:58:26.47531055 +0000 UTC m=+0.149552142 container attach b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]: {
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:    "0": [
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:        {
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "devices": [
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "/dev/loop3"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            ],
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_name": "ceph_lv0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_size": "21470642176",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "name": "ceph_lv0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "tags": {
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cluster_name": "ceph",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.crush_device_class": "",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.encrypted": "0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.objectstore": "bluestore",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osd_id": "0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.type": "block",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.vdo": "0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.with_tpm": "0"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            },
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "type": "block",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "vg_name": "ceph_vg0"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:        }
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:    ],
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:    "1": [
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:        {
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "devices": [
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "/dev/loop4"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            ],
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_name": "ceph_lv1",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_size": "21470642176",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "name": "ceph_lv1",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "tags": {
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cluster_name": "ceph",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.crush_device_class": "",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.encrypted": "0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.objectstore": "bluestore",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osd_id": "1",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.type": "block",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.vdo": "0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.with_tpm": "0"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            },
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "type": "block",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "vg_name": "ceph_vg1"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:        }
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:    ],
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:    "2": [
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:        {
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "devices": [
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "/dev/loop5"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            ],
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_name": "ceph_lv2",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_size": "21470642176",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "name": "ceph_lv2",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "tags": {
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.cluster_name": "ceph",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.crush_device_class": "",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.encrypted": "0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.objectstore": "bluestore",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osd_id": "2",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.type": "block",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.vdo": "0",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:                "ceph.with_tpm": "0"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            },
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "type": "block",
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:            "vg_name": "ceph_vg2"
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:        }
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]:    ]
Feb  1 09:58:26 np0005604375 heuristic_borg[136987]: }
Feb  1 09:58:26 np0005604375 systemd[1]: libpod-b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2.scope: Deactivated successfully.
Feb  1 09:58:26 np0005604375 podman[136924]: 2026-02-01 14:58:26.719745938 +0000 UTC m=+0.393987520 container died b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 09:58:26 np0005604375 systemd[1]: var-lib-containers-storage-overlay-286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2-merged.mount: Deactivated successfully.
Feb  1 09:58:26 np0005604375 podman[136924]: 2026-02-01 14:58:26.764556096 +0000 UTC m=+0.438797698 container remove b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:58:26 np0005604375 systemd[1]: libpod-conmon-b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2.scope: Deactivated successfully.
Feb  1 09:58:26 np0005604375 python3.9[137069]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957905.715483-167-38190679384665/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:27 np0005604375 podman[137222]: 2026-02-01 14:58:27.105212376 +0000 UTC m=+0.029660644 container create 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:58:27 np0005604375 systemd[1]: Started libpod-conmon-502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9.scope.
Feb  1 09:58:27 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:58:27 np0005604375 podman[137222]: 2026-02-01 14:58:27.167094935 +0000 UTC m=+0.091543253 container init 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:58:27 np0005604375 podman[137222]: 2026-02-01 14:58:27.172024823 +0000 UTC m=+0.096473101 container start 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:58:27 np0005604375 wizardly_bardeen[137261]: 167 167
Feb  1 09:58:27 np0005604375 systemd[1]: libpod-502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9.scope: Deactivated successfully.
Feb  1 09:58:27 np0005604375 podman[137222]: 2026-02-01 14:58:27.178117304 +0000 UTC m=+0.102565622 container attach 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:58:27 np0005604375 podman[137222]: 2026-02-01 14:58:27.178415223 +0000 UTC m=+0.102863511 container died 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 09:58:27 np0005604375 podman[137222]: 2026-02-01 14:58:27.092003255 +0000 UTC m=+0.016451553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:58:27 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e05fe6d3c3149f3a42499f462b57e551bbd6b85f443434e624ee93bfccd3103d-merged.mount: Deactivated successfully.
Feb  1 09:58:27 np0005604375 podman[137222]: 2026-02-01 14:58:27.212678245 +0000 UTC m=+0.137126513 container remove 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 09:58:27 np0005604375 systemd[1]: libpod-conmon-502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9.scope: Deactivated successfully.
Feb  1 09:58:27 np0005604375 podman[137338]: 2026-02-01 14:58:27.328371716 +0000 UTC m=+0.033321087 container create ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:58:27 np0005604375 systemd[1]: Started libpod-conmon-ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d.scope.
Feb  1 09:58:27 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:58:27 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:27 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:27 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:27 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:58:27 np0005604375 podman[137338]: 2026-02-01 14:58:27.402171829 +0000 UTC m=+0.107121260 container init ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:58:27 np0005604375 podman[137338]: 2026-02-01 14:58:27.312674355 +0000 UTC m=+0.017623746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:58:27 np0005604375 podman[137338]: 2026-02-01 14:58:27.410108852 +0000 UTC m=+0.115058233 container start ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:58:27 np0005604375 podman[137338]: 2026-02-01 14:58:27.413120747 +0000 UTC m=+0.118070168 container attach ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 09:58:27 np0005604375 python3.9[137332]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:27 np0005604375 python3.9[137518]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957906.9877667-182-62597301555505/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:27 np0005604375 lvm[137557]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:58:27 np0005604375 lvm[137557]: VG ceph_vg0 finished
Feb  1 09:58:27 np0005604375 lvm[137559]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:58:27 np0005604375 lvm[137559]: VG ceph_vg1 finished
Feb  1 09:58:27 np0005604375 lvm[137562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:58:27 np0005604375 lvm[137562]: VG ceph_vg2 finished
Feb  1 09:58:27 np0005604375 lvm[137586]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:58:27 np0005604375 lvm[137586]: VG ceph_vg0 finished
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:58:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 09:58:28 np0005604375 gifted_faraday[137355]: {}
Feb  1 09:58:28 np0005604375 systemd[1]: libpod-ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d.scope: Deactivated successfully.
Feb  1 09:58:28 np0005604375 podman[137338]: 2026-02-01 14:58:28.079022055 +0000 UTC m=+0.783971436 container died ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:58:28 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755-merged.mount: Deactivated successfully.
Feb  1 09:58:28 np0005604375 podman[137338]: 2026-02-01 14:58:28.113288258 +0000 UTC m=+0.818237629 container remove ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 09:58:28 np0005604375 systemd[1]: libpod-conmon-ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d.scope: Deactivated successfully.
Feb  1 09:58:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:58:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:58:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:58:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:58:28 np0005604375 python3.9[137751]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:28 np0005604375 python3.9[137876]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957908.0846329-197-64754111571843/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:58:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:58:29 np0005604375 python3.9[138028]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:30 np0005604375 python3.9[138153]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957909.1708248-212-183959532752015/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:30 np0005604375 python3.9[138305]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:31 np0005604375 python3.9[138457]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:58:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:32 np0005604375 python3.9[138612]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:32 np0005604375 python3.9[138764]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:58:33 np0005604375 python3.9[138917]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:58:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:34 np0005604375 python3.9[139071]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:58:34 np0005604375 python3.9[139226]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:36 np0005604375 python3.9[139376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:58:37 np0005604375 python3.9[139529]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:58:37 np0005604375 ovs-vsctl[139530]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb  1 09:58:37 np0005604375 python3.9[139682]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:58:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:38 np0005604375 python3.9[139837]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:58:38 np0005604375 ovs-vsctl[139838]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb  1 09:58:39 np0005604375 python3.9[139988]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:58:39 np0005604375 python3.9[140142]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:40 np0005604375 python3.9[140294]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:40 np0005604375 python3.9[140372]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:41 np0005604375 python3.9[140524]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:41 np0005604375 python3.9[140602]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:42 np0005604375 python3.9[140754]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:42 np0005604375 python3.9[140906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:43 np0005604375 python3.9[140984]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:43 np0005604375 python3.9[141136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:44 np0005604375 python3.9[141214]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:45 np0005604375 python3.9[141366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:58:45 np0005604375 systemd[1]: Reloading.
Feb  1 09:58:45 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:58:45 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:58:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:45 np0005604375 python3.9[141558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:46 np0005604375 python3.9[141636]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:46 np0005604375 python3.9[141788]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:47 np0005604375 python3.9[141866]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:48 np0005604375 python3.9[142018]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:58:48 np0005604375 systemd[1]: Reloading.
Feb  1 09:58:48 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:58:48 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:58:48 np0005604375 systemd[1]: Starting Create netns directory...
Feb  1 09:58:48 np0005604375 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  1 09:58:48 np0005604375 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  1 09:58:48 np0005604375 systemd[1]: Finished Create netns directory.
Feb  1 09:58:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:58:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:58:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:58:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:58:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:58:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:58:49 np0005604375 python3.9[142211]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:49 np0005604375 python3.9[142363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:50 np0005604375 python3.9[142486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957929.1750705-463-132301564522633/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:50 np0005604375 python3.9[142638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:51 np0005604375 python3.9[142790]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:58:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:51 np0005604375 python3.9[142942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:58:52 np0005604375 python3.9[143065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957931.6012402-496-34135964142153/.source.json _original_basename=.t_4rgi8d follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:53 np0005604375 python3.9[143215]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:58:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:54 np0005604375 python3.9[143638]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb  1 09:58:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:58:55 np0005604375 python3.9[143790]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  1 09:58:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:56 np0005604375 python3[143942]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb  1 09:58:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:58:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:01 np0005604375 podman[143957]: 2026-02-01 14:59:01.178306243 +0000 UTC m=+4.346613596 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  1 09:59:01 np0005604375 podman[144076]: 2026-02-01 14:59:01.276106811 +0000 UTC m=+0.040461838 container create f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 09:59:01 np0005604375 podman[144076]: 2026-02-01 14:59:01.252256871 +0000 UTC m=+0.016611948 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  1 09:59:01 np0005604375 python3[143942]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  1 09:59:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:02 np0005604375 python3.9[144266]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:59:02 np0005604375 python3.9[144420]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:03 np0005604375 python3.9[144496]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:59:03 np0005604375 python3.9[144647]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957943.2359455-574-135271856173957/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:04 np0005604375 python3.9[144723]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 09:59:04 np0005604375 systemd[1]: Reloading.
Feb  1 09:59:04 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:59:04 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:59:05 np0005604375 python3.9[144834]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:59:05 np0005604375 systemd[1]: Reloading.
Feb  1 09:59:05 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:59:05 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:59:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:05 np0005604375 systemd[1]: Starting ovn_controller container...
Feb  1 09:59:05 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:59:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3449622fdff9fe3522a8bb617d602fcbc9463347f45dd946280974b2873978c8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:05 np0005604375 systemd[1]: Started /usr/bin/podman healthcheck run f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16.
Feb  1 09:59:05 np0005604375 podman[144874]: 2026-02-01 14:59:05.639456417 +0000 UTC m=+0.092387477 container init f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  1 09:59:05 np0005604375 ovn_controller[144890]: + sudo -E kolla_set_configs
Feb  1 09:59:05 np0005604375 podman[144874]: 2026-02-01 14:59:05.665006505 +0000 UTC m=+0.117937595 container start f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb  1 09:59:05 np0005604375 edpm-start-podman-container[144874]: ovn_controller
Feb  1 09:59:05 np0005604375 systemd[1]: Created slice User Slice of UID 0.
Feb  1 09:59:05 np0005604375 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb  1 09:59:05 np0005604375 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb  1 09:59:05 np0005604375 edpm-start-podman-container[144873]: Creating additional drop-in dependency for "ovn_controller" (f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16)
Feb  1 09:59:05 np0005604375 podman[144897]: 2026-02-01 14:59:05.72177507 +0000 UTC m=+0.053840084 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:59:05 np0005604375 systemd[1]: Starting User Manager for UID 0...
Feb  1 09:59:05 np0005604375 systemd[1]: f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16-40aedf39e97a9789.service: Main process exited, code=exited, status=1/FAILURE
Feb  1 09:59:05 np0005604375 systemd[1]: f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16-40aedf39e97a9789.service: Failed with result 'exit-code'.
Feb  1 09:59:05 np0005604375 systemd[1]: Reloading.
Feb  1 09:59:05 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:59:05 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:59:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:05 np0005604375 systemd[1]: Started ovn_controller container.
Feb  1 09:59:06 np0005604375 systemd[144931]: Queued start job for default target Main User Target.
Feb  1 09:59:06 np0005604375 systemd[144931]: Created slice User Application Slice.
Feb  1 09:59:06 np0005604375 systemd[144931]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb  1 09:59:06 np0005604375 systemd[144931]: Started Daily Cleanup of User's Temporary Directories.
Feb  1 09:59:06 np0005604375 systemd[144931]: Reached target Paths.
Feb  1 09:59:06 np0005604375 systemd[144931]: Reached target Timers.
Feb  1 09:59:06 np0005604375 systemd[144931]: Starting D-Bus User Message Bus Socket...
Feb  1 09:59:06 np0005604375 systemd[144931]: Starting Create User's Volatile Files and Directories...
Feb  1 09:59:06 np0005604375 systemd[144931]: Finished Create User's Volatile Files and Directories.
Feb  1 09:59:06 np0005604375 systemd[144931]: Listening on D-Bus User Message Bus Socket.
Feb  1 09:59:06 np0005604375 systemd[144931]: Reached target Sockets.
Feb  1 09:59:06 np0005604375 systemd[144931]: Reached target Basic System.
Feb  1 09:59:06 np0005604375 systemd[144931]: Reached target Main User Target.
Feb  1 09:59:06 np0005604375 systemd[144931]: Startup finished in 143ms.
Feb  1 09:59:06 np0005604375 systemd[1]: Started User Manager for UID 0.
Feb  1 09:59:06 np0005604375 systemd[1]: Started Session c1 of User root.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: INFO:__main__:Validating config file
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: INFO:__main__:Writing out command to execute
Feb  1 09:59:06 np0005604375 systemd[1]: session-c1.scope: Deactivated successfully.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: ++ cat /run_command
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + ARGS=
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + sudo kolla_copy_cacerts
Feb  1 09:59:06 np0005604375 systemd[1]: Started Session c2 of User root.
Feb  1 09:59:06 np0005604375 systemd[1]: session-c2.scope: Deactivated successfully.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + [[ ! -n '' ]]
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + . kolla_extend_start
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + umask 0022
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.2702] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.2711] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <warn>  [1769957946.2714] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.2723] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.2730] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.2735] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  1 09:59:06 np0005604375 kernel: br-int: entered promiscuous mode
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  1 09:59:06 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:06Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.3066] manager: (ovn-492978-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb  1 09:59:06 np0005604375 systemd-udevd[145075]: Network interface NamePolicy= disabled on kernel command line.
Feb  1 09:59:06 np0005604375 kernel: genev_sys_6081: entered promiscuous mode
Feb  1 09:59:06 np0005604375 systemd-udevd[145076]: Network interface NamePolicy= disabled on kernel command line.
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.3225] device (genev_sys_6081): carrier: link connected
Feb  1 09:59:06 np0005604375 NetworkManager[48987]: <info>  [1769957946.3229] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Feb  1 09:59:06 np0005604375 python3.9[145153]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  1 09:59:07 np0005604375 python3.9[145305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:08 np0005604375 python3.9[145428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957947.3504937-619-268364421984899/.source.yaml _original_basename=.tijtmdip follow=False checksum=71f291fd641d85e2615dba61e77205902aaa93d5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:09 np0005604375 python3.9[145580]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:59:09 np0005604375 ovs-vsctl[145581]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb  1 09:59:09 np0005604375 python3.9[145733]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:59:09 np0005604375 ovs-vsctl[145735]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb  1 09:59:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.484547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950484618, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1712, "num_deletes": 252, "total_data_size": 2490423, "memory_usage": 2539640, "flush_reason": "Manual Compaction"}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950490897, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1456056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7300, "largest_seqno": 9011, "table_properties": {"data_size": 1450321, "index_size": 2618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16813, "raw_average_key_size": 21, "raw_value_size": 1436755, "raw_average_value_size": 1795, "num_data_blocks": 123, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957790, "oldest_key_time": 1769957790, "file_creation_time": 1769957950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6379 microseconds, and 2687 cpu microseconds.
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.490934) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1456056 bytes OK
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.490948) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492471) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492483) EVENT_LOG_v1 {"time_micros": 1769957950492479, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492499) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2482747, prev total WAL file size 2482747, number of live WAL files 2.
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1421KB)], [20(7515KB)]
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950493035, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9151649, "oldest_snapshot_seqno": -1}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3387 keys, 7114934 bytes, temperature: kUnknown
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950520005, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7114934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7088907, "index_size": 16445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 80951, "raw_average_key_size": 23, "raw_value_size": 7024341, "raw_average_value_size": 2073, "num_data_blocks": 730, "num_entries": 3387, "num_filter_entries": 3387, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769957950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.520277) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7114934 bytes
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.521697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 338.4 rd, 263.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3828, records dropped: 441 output_compression: NoCompression
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.521726) EVENT_LOG_v1 {"time_micros": 1769957950521712, "job": 6, "event": "compaction_finished", "compaction_time_micros": 27042, "compaction_time_cpu_micros": 11406, "output_level": 6, "num_output_files": 1, "total_output_size": 7114934, "num_input_records": 3828, "num_output_records": 3387, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950522027, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950523171, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 09:59:10 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 09:59:10 np0005604375 python3.9[145888]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 09:59:10 np0005604375 ovs-vsctl[145889]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb  1 09:59:11 np0005604375 systemd[1]: session-45.scope: Deactivated successfully.
Feb  1 09:59:11 np0005604375 systemd[1]: session-45.scope: Consumed 48.844s CPU time.
Feb  1 09:59:11 np0005604375 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Feb  1 09:59:11 np0005604375 systemd-logind[786]: Removed session 45.
Feb  1 09:59:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:16 np0005604375 systemd-logind[786]: New session 47 of user zuul.
Feb  1 09:59:16 np0005604375 systemd[1]: Started Session 47 of User zuul.
Feb  1 09:59:16 np0005604375 systemd[1]: Stopping User Manager for UID 0...
Feb  1 09:59:16 np0005604375 systemd[144931]: Activating special unit Exit the Session...
Feb  1 09:59:16 np0005604375 systemd[144931]: Stopped target Main User Target.
Feb  1 09:59:16 np0005604375 systemd[144931]: Stopped target Basic System.
Feb  1 09:59:16 np0005604375 systemd[144931]: Stopped target Paths.
Feb  1 09:59:16 np0005604375 systemd[144931]: Stopped target Sockets.
Feb  1 09:59:16 np0005604375 systemd[144931]: Stopped target Timers.
Feb  1 09:59:16 np0005604375 systemd[144931]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  1 09:59:16 np0005604375 systemd[144931]: Closed D-Bus User Message Bus Socket.
Feb  1 09:59:16 np0005604375 systemd[144931]: Stopped Create User's Volatile Files and Directories.
Feb  1 09:59:16 np0005604375 systemd[144931]: Removed slice User Application Slice.
Feb  1 09:59:16 np0005604375 systemd[144931]: Reached target Shutdown.
Feb  1 09:59:16 np0005604375 systemd[144931]: Finished Exit the Session.
Feb  1 09:59:16 np0005604375 systemd[144931]: Reached target Exit the Session.
Feb  1 09:59:16 np0005604375 systemd[1]: user@0.service: Deactivated successfully.
Feb  1 09:59:16 np0005604375 systemd[1]: Stopped User Manager for UID 0.
Feb  1 09:59:16 np0005604375 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb  1 09:59:16 np0005604375 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb  1 09:59:16 np0005604375 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb  1 09:59:16 np0005604375 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb  1 09:59:16 np0005604375 systemd[1]: Removed slice User Slice of UID 0.
Feb  1 09:59:17 np0005604375 python3.9[146070]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:59:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:59:17
Feb  1 09:59:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 09:59:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 09:59:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log']
Feb  1 09:59:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 09:59:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:18 np0005604375 python3.9[146226]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 09:59:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 09:59:18 np0005604375 python3.9[146378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:20 np0005604375 python3.9[146530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:20 np0005604375 python3.9[146693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:21 np0005604375 python3.9[146845]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:21 np0005604375 python3.9[146995]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 09:59:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:22 np0005604375 python3.9[147147]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  1 09:59:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:24 np0005604375 python3.9[147297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:24 np0005604375 python3.9[147419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957963.4326386-81-112061173049181/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:25 np0005604375 python3.9[147569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:25 np0005604375 python3.9[147690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957964.948161-96-77290954161802/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:26 np0005604375 python3.9[147842]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 09:59:27 np0005604375 python3.9[147926]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 09:59:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 09:59:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 09:59:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 09:59:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 09:59:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:59:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 09:59:29 np0005604375 podman[148127]: 2026-02-01 14:59:29.063729488 +0000 UTC m=+0.041688242 container create 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb  1 09:59:29 np0005604375 systemd[1]: Started libpod-conmon-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope.
Feb  1 09:59:29 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:59:29 np0005604375 podman[148127]: 2026-02-01 14:59:29.128975363 +0000 UTC m=+0.106934217 container init 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:59:29 np0005604375 podman[148127]: 2026-02-01 14:59:29.134251763 +0000 UTC m=+0.112210537 container start 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 09:59:29 np0005604375 podman[148127]: 2026-02-01 14:59:29.043973104 +0000 UTC m=+0.021931888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:59:29 np0005604375 focused_snyder[148160]: 167 167
Feb  1 09:59:29 np0005604375 systemd[1]: libpod-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope: Deactivated successfully.
Feb  1 09:59:29 np0005604375 conmon[148160]: conmon 7185b3a91ccfb212bdfd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope/container/memory.events
Feb  1 09:59:29 np0005604375 podman[148127]: 2026-02-01 14:59:29.139195325 +0000 UTC m=+0.117154129 container attach 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:59:29 np0005604375 podman[148127]: 2026-02-01 14:59:29.139621727 +0000 UTC m=+0.117580521 container died 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:59:29 np0005604375 systemd[1]: var-lib-containers-storage-overlay-514f040d4e13b80901541c1fbd2ac630963e8afd6811d2615b4014e7b53e932c-merged.mount: Deactivated successfully.
Feb  1 09:59:29 np0005604375 podman[148127]: 2026-02-01 14:59:29.182258376 +0000 UTC m=+0.160217130 container remove 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:59:29 np0005604375 systemd[1]: libpod-conmon-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope: Deactivated successfully.
Feb  1 09:59:29 np0005604375 podman[148184]: 2026-02-01 14:59:29.316060469 +0000 UTC m=+0.036076112 container create aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:59:29 np0005604375 systemd[1]: Started libpod-conmon-aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b.scope.
Feb  1 09:59:29 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:59:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:29 np0005604375 podman[148184]: 2026-02-01 14:59:29.300804983 +0000 UTC m=+0.020820636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:59:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:29 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:29 np0005604375 podman[148184]: 2026-02-01 14:59:29.431861319 +0000 UTC m=+0.151877032 container init aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:59:29 np0005604375 podman[148184]: 2026-02-01 14:59:29.441683079 +0000 UTC m=+0.161698742 container start aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 09:59:29 np0005604375 podman[148184]: 2026-02-01 14:59:29.445456117 +0000 UTC m=+0.165471800 container attach aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:59:29 np0005604375 nifty_galois[148201]: --> passed data devices: 0 physical, 3 LVM
Feb  1 09:59:29 np0005604375 nifty_galois[148201]: --> All data devices are unavailable
Feb  1 09:59:29 np0005604375 systemd[1]: libpod-aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b.scope: Deactivated successfully.
Feb  1 09:59:29 np0005604375 podman[148184]: 2026-02-01 14:59:29.837252364 +0000 UTC m=+0.557268027 container died aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 09:59:29 np0005604375 systemd[1]: var-lib-containers-storage-overlay-3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06-merged.mount: Deactivated successfully.
Feb  1 09:59:29 np0005604375 podman[148184]: 2026-02-01 14:59:29.886750779 +0000 UTC m=+0.606766422 container remove aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:59:29 np0005604375 python3.9[148283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 09:59:29 np0005604375 systemd[1]: libpod-conmon-aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b.scope: Deactivated successfully.
Feb  1 09:59:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:30 np0005604375 podman[148452]: 2026-02-01 14:59:30.280566273 +0000 UTC m=+0.052771169 container create f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:59:30 np0005604375 systemd[1]: Started libpod-conmon-f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13.scope.
Feb  1 09:59:30 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:59:30 np0005604375 podman[148452]: 2026-02-01 14:59:30.259331836 +0000 UTC m=+0.031536812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:59:30 np0005604375 podman[148452]: 2026-02-01 14:59:30.365411678 +0000 UTC m=+0.137616644 container init f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 09:59:30 np0005604375 podman[148452]: 2026-02-01 14:59:30.370536144 +0000 UTC m=+0.142741060 container start f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 09:59:30 np0005604375 podman[148452]: 2026-02-01 14:59:30.374018444 +0000 UTC m=+0.146223380 container attach f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 09:59:30 np0005604375 strange_rosalind[148491]: 167 167
Feb  1 09:59:30 np0005604375 systemd[1]: libpod-f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13.scope: Deactivated successfully.
Feb  1 09:59:30 np0005604375 podman[148452]: 2026-02-01 14:59:30.376363721 +0000 UTC m=+0.148568657 container died f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 09:59:30 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f44242f6b7c3c7306fb356afc15fd74d7df65e77e04054904073b31d2fd4c0d1-merged.mount: Deactivated successfully.
Feb  1 09:59:30 np0005604375 podman[148452]: 2026-02-01 14:59:30.420209644 +0000 UTC m=+0.192414560 container remove f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 09:59:30 np0005604375 systemd[1]: libpod-conmon-f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13.scope: Deactivated successfully.
Feb  1 09:59:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:30 np0005604375 podman[148565]: 2026-02-01 14:59:30.552608958 +0000 UTC m=+0.040196110 container create 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 09:59:30 np0005604375 systemd[1]: Started libpod-conmon-5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868.scope.
Feb  1 09:59:30 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:59:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:30 np0005604375 podman[148565]: 2026-02-01 14:59:30.622702451 +0000 UTC m=+0.110289623 container init 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb  1 09:59:30 np0005604375 podman[148565]: 2026-02-01 14:59:30.627687203 +0000 UTC m=+0.115274335 container start 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:59:30 np0005604375 podman[148565]: 2026-02-01 14:59:30.630684039 +0000 UTC m=+0.118271181 container attach 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:59:30 np0005604375 podman[148565]: 2026-02-01 14:59:30.537121605 +0000 UTC m=+0.024708767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:59:30 np0005604375 python3.9[148559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]: {
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:    "0": [
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:        {
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "devices": [
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "/dev/loop3"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            ],
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_name": "ceph_lv0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_size": "21470642176",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "name": "ceph_lv0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "tags": {
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cluster_name": "ceph",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.crush_device_class": "",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.encrypted": "0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.objectstore": "bluestore",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osd_id": "0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.type": "block",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.vdo": "0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.with_tpm": "0"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            },
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "type": "block",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "vg_name": "ceph_vg0"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:        }
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:    ],
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:    "1": [
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:        {
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "devices": [
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "/dev/loop4"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            ],
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_name": "ceph_lv1",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_size": "21470642176",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "name": "ceph_lv1",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "tags": {
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cluster_name": "ceph",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.crush_device_class": "",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.encrypted": "0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.objectstore": "bluestore",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osd_id": "1",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.type": "block",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.vdo": "0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.with_tpm": "0"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            },
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "type": "block",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "vg_name": "ceph_vg1"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:        }
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:    ],
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:    "2": [
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:        {
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "devices": [
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "/dev/loop5"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            ],
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_name": "ceph_lv2",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_size": "21470642176",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "name": "ceph_lv2",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "tags": {
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cephx_lockbox_secret": "",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.cluster_name": "ceph",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.crush_device_class": "",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.encrypted": "0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.objectstore": "bluestore",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osd_id": "2",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.type": "block",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.vdo": "0",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:                "ceph.with_tpm": "0"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            },
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "type": "block",
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:            "vg_name": "ceph_vg2"
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:        }
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]:    ]
Feb  1 09:59:30 np0005604375 boring_bardeen[148582]: }
Feb  1 09:59:30 np0005604375 systemd[1]: libpod-5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868.scope: Deactivated successfully.
Feb  1 09:59:30 np0005604375 podman[148565]: 2026-02-01 14:59:30.903117695 +0000 UTC m=+0.390704817 container died 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:59:30 np0005604375 systemd[1]: var-lib-containers-storage-overlay-48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538-merged.mount: Deactivated successfully.
Feb  1 09:59:30 np0005604375 podman[148565]: 2026-02-01 14:59:30.935920432 +0000 UTC m=+0.423507554 container remove 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:59:30 np0005604375 systemd[1]: libpod-conmon-5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868.scope: Deactivated successfully.
Feb  1 09:59:31 np0005604375 python3.9[148711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957970.1504092-133-82653365186567/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:31 np0005604375 podman[148839]: 2026-02-01 14:59:31.310084094 +0000 UTC m=+0.049529486 container create 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:59:31 np0005604375 systemd[1]: Started libpod-conmon-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope.
Feb  1 09:59:31 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:59:31 np0005604375 podman[148839]: 2026-02-01 14:59:31.284134743 +0000 UTC m=+0.023580175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:59:31 np0005604375 podman[148839]: 2026-02-01 14:59:31.381845675 +0000 UTC m=+0.121291127 container init 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  1 09:59:31 np0005604375 podman[148839]: 2026-02-01 14:59:31.389500514 +0000 UTC m=+0.128945906 container start 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 09:59:31 np0005604375 podman[148839]: 2026-02-01 14:59:31.39358435 +0000 UTC m=+0.133029722 container attach 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:59:31 np0005604375 quizzical_gould[148881]: 167 167
Feb  1 09:59:31 np0005604375 systemd[1]: libpod-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope: Deactivated successfully.
Feb  1 09:59:31 np0005604375 conmon[148881]: conmon 70d63f6c1a193be27daa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope/container/memory.events
Feb  1 09:59:31 np0005604375 podman[148839]: 2026-02-01 14:59:31.398711877 +0000 UTC m=+0.138157239 container died 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:59:31 np0005604375 systemd[1]: var-lib-containers-storage-overlay-39d68d42f3b51597d7dcf55db2fc269cfb90c6ac72273e0c6d437e88108d8461-merged.mount: Deactivated successfully.
Feb  1 09:59:31 np0005604375 podman[148839]: 2026-02-01 14:59:31.432379009 +0000 UTC m=+0.171824371 container remove 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 09:59:31 np0005604375 systemd[1]: libpod-conmon-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope: Deactivated successfully.
Feb  1 09:59:31 np0005604375 podman[148978]: 2026-02-01 14:59:31.621764091 +0000 UTC m=+0.051982516 container create ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 09:59:31 np0005604375 systemd[1]: Started libpod-conmon-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope.
Feb  1 09:59:31 np0005604375 podman[148978]: 2026-02-01 14:59:31.596607182 +0000 UTC m=+0.026825657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 09:59:31 np0005604375 systemd[1]: Started libcrun container.
Feb  1 09:59:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 09:59:31 np0005604375 podman[148978]: 2026-02-01 14:59:31.724486317 +0000 UTC m=+0.154704792 container init ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 09:59:31 np0005604375 python3.9[148972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:31 np0005604375 podman[148978]: 2026-02-01 14:59:31.734688459 +0000 UTC m=+0.164906864 container start ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  1 09:59:31 np0005604375 podman[148978]: 2026-02-01 14:59:31.73822963 +0000 UTC m=+0.168448055 container attach ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 09:59:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:32 np0005604375 python3.9[149130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957971.2335758-133-60088184149796/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:32 np0005604375 lvm[149218]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 09:59:32 np0005604375 lvm[149218]: VG ceph_vg0 finished
Feb  1 09:59:32 np0005604375 lvm[149219]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 09:59:32 np0005604375 lvm[149219]: VG ceph_vg1 finished
Feb  1 09:59:32 np0005604375 lvm[149221]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 09:59:32 np0005604375 lvm[149221]: VG ceph_vg2 finished
Feb  1 09:59:32 np0005604375 sad_dirac[148995]: {}
Feb  1 09:59:32 np0005604375 systemd[1]: libpod-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope: Deactivated successfully.
Feb  1 09:59:32 np0005604375 systemd[1]: libpod-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope: Consumed 1.062s CPU time.
Feb  1 09:59:32 np0005604375 podman[148978]: 2026-02-01 14:59:32.525228381 +0000 UTC m=+0.955446816 container died ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 09:59:32 np0005604375 systemd[1]: var-lib-containers-storage-overlay-29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545-merged.mount: Deactivated successfully.
Feb  1 09:59:32 np0005604375 podman[148978]: 2026-02-01 14:59:32.579194373 +0000 UTC m=+1.009412798 container remove ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  1 09:59:32 np0005604375 systemd[1]: libpod-conmon-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope: Deactivated successfully.
Feb  1 09:59:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 09:59:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:59:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 09:59:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:59:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:59:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 09:59:33 np0005604375 python3.9[149388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:33 np0005604375 python3.9[149509]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957972.9121368-177-116280454783839/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:34 np0005604375 python3.9[149659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:34 np0005604375 python3.9[149780]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957973.9887934-177-222771691261912/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:35 np0005604375 python3.9[149930]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 09:59:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:35 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:35Z|00025|memory|INFO|16896 kB peak resident set size after 29.7 seconds
Feb  1 09:59:35 np0005604375 ovn_controller[144890]: 2026-02-01T14:59:35Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb  1 09:59:36 np0005604375 podman[150009]: 2026-02-01 14:59:36.027502917 +0000 UTC m=+0.109148760 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb  1 09:59:36 np0005604375 python3.9[150112]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:37 np0005604375 python3.9[150264]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:37 np0005604375 python3.9[150342]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:38 np0005604375 python3.9[150494]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:38 np0005604375 python3.9[150572]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:39 np0005604375 python3.9[150724]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:39 np0005604375 python3.9[150876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:40 np0005604375 python3.9[150954]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:40 np0005604375 python3.9[151106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:41 np0005604375 python3.9[151184]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:41 np0005604375 python3.9[151336]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:59:41 np0005604375 systemd[1]: Reloading.
Feb  1 09:59:42 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:59:42 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:59:42 np0005604375 python3.9[151526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:43 np0005604375 python3.9[151604]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:43 np0005604375 python3.9[151756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:44 np0005604375 python3.9[151834]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:45 np0005604375 python3.9[151986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 09:59:45 np0005604375 systemd[1]: Reloading.
Feb  1 09:59:45 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 09:59:45 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 09:59:45 np0005604375 systemd[1]: Starting Create netns directory...
Feb  1 09:59:45 np0005604375 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  1 09:59:45 np0005604375 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  1 09:59:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:45 np0005604375 systemd[1]: Finished Create netns directory.
Feb  1 09:59:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:46 np0005604375 python3.9[152180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:47 np0005604375 python3.9[152332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:47 np0005604375 python3.9[152455]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957986.5606518-328-249842516700950/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:59:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:59:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:59:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:59:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 09:59:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 09:59:48 np0005604375 python3.9[152607]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:49 np0005604375 python3.9[152759]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 09:59:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:50 np0005604375 python3.9[152911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 09:59:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:50 np0005604375 python3.9[153034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957989.555074-361-206525969494613/.source.json _original_basename=.yhu2ina2 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:51 np0005604375 python3.9[153184]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 09:59:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:53 np0005604375 python3.9[153607]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb  1 09:59:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:54 np0005604375 python3.9[153759]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  1 09:59:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 09:59:55 np0005604375 python3[153911]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb  1 09:59:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 09:59:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:00:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2097 writes, 9242 keys, 2097 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2097 writes, 2097 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2097 writes, 9242 keys, 2097 commit groups, 1.0 writes per commit group, ingest: 12.29 MB, 0.02 MB/s#012Interval WAL: 2097 writes, 2097 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    160.3      0.05              0.02         3    0.018       0      0       0.0       0.0#012  L6      1/0    6.79 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    236.7    207.5      0.07              0.03         2    0.034    7145    730       0.0       0.0#012 Sum      1/0    6.79 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    131.2    186.5      0.12              0.06         5    0.025    7145    730       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    137.0    194.2      0.12              0.06         4    0.029    7145    730       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    236.7    207.5      0.07              0.03         2    0.034    7145    730       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    176.1      0.05              0.02         2    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 308.00 MB usage: 636.55 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(38,545.27 KB,0.172885%) FilterBlock(6,27.86 KB,0.00883325%) IndexBlock(6,63.42 KB,0.0201089%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  1 10:00:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:01 np0005604375 podman[153925]: 2026-02-01 15:00:01.666804817 +0000 UTC m=+6.053836018 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  1 10:00:01 np0005604375 podman[154075]: 2026-02-01 15:00:01.792722575 +0000 UTC m=+0.053792868 container create 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb  1 10:00:01 np0005604375 podman[154075]: 2026-02-01 15:00:01.768798481 +0000 UTC m=+0.029868744 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  1 10:00:01 np0005604375 python3[153911]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  1 10:00:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:02 np0005604375 python3.9[154265]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:00:03 np0005604375 python3.9[154419]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:03 np0005604375 python3.9[154495]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:00:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:04 np0005604375 python3.9[154646]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769958003.6657736-439-78844630350748/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:04 np0005604375 python3.9[154722]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 10:00:04 np0005604375 systemd[1]: Reloading.
Feb  1 10:00:04 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:00:04 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:00:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:05 np0005604375 python3.9[154832]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:05 np0005604375 systemd[1]: Reloading.
Feb  1 10:00:05 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:00:05 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:00:05 np0005604375 systemd[1]: Starting ovn_metadata_agent container...
Feb  1 10:00:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:06 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:00:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6a638621d1807aa58f3c5aaf543bfcc60f34f23a3c0997ac8a2414e38b0938/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6a638621d1807aa58f3c5aaf543bfcc60f34f23a3c0997ac8a2414e38b0938/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:06 np0005604375 systemd[1]: Started /usr/bin/podman healthcheck run 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815.
Feb  1 10:00:06 np0005604375 podman[154874]: 2026-02-01 15:00:06.055654442 +0000 UTC m=+0.138240082 container init 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + sudo -E kolla_set_configs
Feb  1 10:00:06 np0005604375 podman[154874]: 2026-02-01 15:00:06.081560182 +0000 UTC m=+0.164145832 container start 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  1 10:00:06 np0005604375 edpm-start-podman-container[154874]: ovn_metadata_agent
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Validating config file
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Copying service configuration files
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Writing out command to execute
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: ++ cat /run_command
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + CMD=neutron-ovn-metadata-agent
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + ARGS=
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + sudo kolla_copy_cacerts
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + [[ ! -n '' ]]
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + . kolla_extend_start
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: Running command: 'neutron-ovn-metadata-agent'
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + umask 0022
Feb  1 10:00:06 np0005604375 ovn_metadata_agent[154890]: + exec neutron-ovn-metadata-agent
Feb  1 10:00:06 np0005604375 edpm-start-podman-container[154873]: Creating additional drop-in dependency for "ovn_metadata_agent" (1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815)
Feb  1 10:00:06 np0005604375 podman[154894]: 2026-02-01 15:00:06.177389731 +0000 UTC m=+0.120763312 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  1 10:00:06 np0005604375 podman[154906]: 2026-02-01 15:00:06.177270657 +0000 UTC m=+0.088719976 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  1 10:00:06 np0005604375 systemd[1]: Reloading.
Feb  1 10:00:06 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:00:06 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:00:06 np0005604375 systemd[1]: Started ovn_metadata_agent container.
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.011024) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007011053, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 686, "num_deletes": 251, "total_data_size": 854934, "memory_usage": 866936, "flush_reason": "Manual Compaction"}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007016259, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 847505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9012, "largest_seqno": 9697, "table_properties": {"data_size": 843899, "index_size": 1450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7816, "raw_average_key_size": 18, "raw_value_size": 836709, "raw_average_value_size": 1982, "num_data_blocks": 67, "num_entries": 422, "num_filter_entries": 422, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957951, "oldest_key_time": 1769957951, "file_creation_time": 1769958007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5259 microseconds, and 1697 cpu microseconds.
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.016285) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 847505 bytes OK
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.016315) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017338) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017351) EVENT_LOG_v1 {"time_micros": 1769958007017347, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 851345, prev total WAL file size 851345, number of live WAL files 2.
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017595) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(827KB)], [23(6948KB)]
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007017659, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7962439, "oldest_snapshot_seqno": -1}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3295 keys, 6147633 bytes, temperature: kUnknown
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007040538, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6147633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6123704, "index_size": 14604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79842, "raw_average_key_size": 24, "raw_value_size": 6062162, "raw_average_value_size": 1839, "num_data_blocks": 638, "num_entries": 3295, "num_filter_entries": 3295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.040705) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6147633 bytes
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.041891) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 347.2 rd, 268.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.8 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(16.6) write-amplify(7.3) OK, records in: 3809, records dropped: 514 output_compression: NoCompression
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.041907) EVENT_LOG_v1 {"time_micros": 1769958007041900, "job": 8, "event": "compaction_finished", "compaction_time_micros": 22935, "compaction_time_cpu_micros": 9087, "output_level": 6, "num_output_files": 1, "total_output_size": 6147633, "num_input_records": 3809, "num_output_records": 3295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007042110, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007042647, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:00:07 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:00:07 np0005604375 python3.9[155150]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.759 154901 INFO neutron.common.config [-] Logging enabled!#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.759 154901 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.759 154901 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.791 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.791 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.854 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c3bd6005-873a-4620-bb39-624ed33e90e2 (UUID: c3bd6005-873a-4620-bb39-624ed33e90e2) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.884 154901 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.884 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.884 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.885 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.887 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.893 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.899 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c3bd6005-873a-4620-bb39-624ed33e90e2'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb6bbf84820>], external_ids={}, name=c3bd6005-873a-4620-bb39-624ed33e90e2, nb_cfg_timestamp=1769957954302, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.900 154901 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb6bbf84fd0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.901 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.901 154901 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.902 154901 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.902 154901 INFO oslo_service.service [-] Starting 1 workers
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.905 154901 DEBUG oslo_service.service [-] Started child 155182 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.908 155182 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-8290261'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.908 154901 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp6yvx35yo/privsep.sock']
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.926 155182 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.927 155182 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.927 155182 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb  1 10:00:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.930 155182 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.935 155182 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb  1 10:00:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.941 155182 INFO eventlet.wsgi.server [-] (155182) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Feb  1 10:00:08 np0005604375 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.499 154901 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.500 154901 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6yvx35yo/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.414 155315 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.419 155315 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.422 155315 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.423 155315 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155315
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.503 155315 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffbecc0-9c75-4272-8029-3823b1d72e8a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.908 155315 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.908 155315 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  1 10:00:08 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.909 155315 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  1 10:00:09 np0005604375 python3.9[155314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.359 155315 DEBUG oslo.privsep.daemon [-] privsep: reply[d7eb1d28-7468-44a1-9cb1-4813a8fde834]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.362 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, column=external_ids, values=({'neutron:ovn-metadata-id': 'a7cfbf75-618c-52b8-b548-605f3c91bcbe'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.371 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.377 154901 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:00:09 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  1 10:00:09 np0005604375 python3.9[155444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958007.929342-484-220338939120569/.source.yaml _original_basename=.43efzx1r follow=False checksum=85d5f776cfd8fbfbcc86699b9b1dc89afe8e4b0a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:09 np0005604375 systemd-logind[786]: Session 47 logged out. Waiting for processes to exit.
Feb  1 10:00:09 np0005604375 systemd[1]: session-47.scope: Deactivated successfully.
Feb  1 10:00:09 np0005604375 systemd[1]: session-47.scope: Consumed 47.335s CPU time.
Feb  1 10:00:10 np0005604375 systemd-logind[786]: Removed session 47.
Feb  1 10:00:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:15 np0005604375 systemd-logind[786]: New session 48 of user zuul.
Feb  1 10:00:15 np0005604375 systemd[1]: Started Session 48 of User zuul.
Feb  1 10:00:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:16 np0005604375 python3.9[155622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 10:00:17 np0005604375 python3.9[155778]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:00:17
Feb  1 10:00:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:00:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:00:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'vms', 'default.rgw.meta', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb  1 10:00:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:00:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:18 np0005604375 python3.9[155943]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 10:00:18 np0005604375 systemd[1]: Reloading.
Feb  1 10:00:18 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:00:18 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:00:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:00:19 np0005604375 python3.9[156128]: ansible-ansible.builtin.service_facts Invoked
Feb  1 10:00:19 np0005604375 network[156145]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 10:00:19 np0005604375 network[156146]: 'network-scripts' will be removed from distribution in near future.
Feb  1 10:00:19 np0005604375 network[156147]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 10:00:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:23 np0005604375 python3.9[156409]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:24 np0005604375 python3.9[156562]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:25 np0005604375 python3.9[156715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:26 np0005604375 python3.9[156868]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:27 np0005604375 python3.9[157021]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:00:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:00:28 np0005604375 python3.9[157174]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:29 np0005604375 python3.9[157327]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:00:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:30 np0005604375 python3.9[157480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:31 np0005604375 python3.9[157632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:31 np0005604375 python3.9[157784]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:32 np0005604375 python3.9[157936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:32 np0005604375 python3.9[158133]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:33 np0005604375 python3.9[158361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:00:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:00:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:34 np0005604375 podman[158608]: 2026-02-01 15:00:34.039624067 +0000 UTC m=+0.054467301 container create 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:00:34 np0005604375 python3.9[158594]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:34 np0005604375 systemd[1]: Started libpod-conmon-116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b.scope.
Feb  1 10:00:34 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:00:34 np0005604375 podman[158608]: 2026-02-01 15:00:34.014769349 +0000 UTC m=+0.029612593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:00:34 np0005604375 podman[158608]: 2026-02-01 15:00:34.118902592 +0000 UTC m=+0.133745836 container init 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 10:00:34 np0005604375 podman[158608]: 2026-02-01 15:00:34.125521778 +0000 UTC m=+0.140365012 container start 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:00:34 np0005604375 podman[158608]: 2026-02-01 15:00:34.131810195 +0000 UTC m=+0.146653399 container attach 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:00:34 np0005604375 xenodochial_nash[158625]: 167 167
Feb  1 10:00:34 np0005604375 systemd[1]: libpod-116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b.scope: Deactivated successfully.
Feb  1 10:00:34 np0005604375 podman[158608]: 2026-02-01 15:00:34.141470656 +0000 UTC m=+0.156313850 container died 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:00:34 np0005604375 systemd[1]: var-lib-containers-storage-overlay-6f910c801279fa53bf3013b5843ac74f5d5d739b9bf86e77d354aa8ba8e19ad3-merged.mount: Deactivated successfully.
Feb  1 10:00:34 np0005604375 podman[158608]: 2026-02-01 15:00:34.184838064 +0000 UTC m=+0.199681268 container remove 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:00:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:00:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:00:34 np0005604375 systemd[1]: libpod-conmon-116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b.scope: Deactivated successfully.
Feb  1 10:00:34 np0005604375 podman[158706]: 2026-02-01 15:00:34.353737406 +0000 UTC m=+0.045890550 container create c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 10:00:34 np0005604375 systemd[1]: Started libpod-conmon-c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e.scope.
Feb  1 10:00:34 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:00:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:34 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:34 np0005604375 podman[158706]: 2026-02-01 15:00:34.334516926 +0000 UTC m=+0.026670050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:00:34 np0005604375 podman[158706]: 2026-02-01 15:00:34.444553885 +0000 UTC m=+0.136706999 container init c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 10:00:34 np0005604375 podman[158706]: 2026-02-01 15:00:34.452511059 +0000 UTC m=+0.144664163 container start c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 10:00:34 np0005604375 podman[158706]: 2026-02-01 15:00:34.457156949 +0000 UTC m=+0.149310053 container attach c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 10:00:34 np0005604375 python3.9[158819]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:34 np0005604375 relaxed_jemison[158762]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:00:34 np0005604375 relaxed_jemison[158762]: --> All data devices are unavailable
Feb  1 10:00:34 np0005604375 systemd[1]: libpod-c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e.scope: Deactivated successfully.
Feb  1 10:00:34 np0005604375 podman[158706]: 2026-02-01 15:00:34.954001108 +0000 UTC m=+0.646154222 container died c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 10:00:34 np0005604375 systemd[1]: var-lib-containers-storage-overlay-2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b-merged.mount: Deactivated successfully.
Feb  1 10:00:34 np0005604375 podman[158706]: 2026-02-01 15:00:34.997700995 +0000 UTC m=+0.689854109 container remove c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 10:00:35 np0005604375 systemd[1]: libpod-conmon-c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e.scope: Deactivated successfully.
Feb  1 10:00:35 np0005604375 python3.9[159025]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:35 np0005604375 podman[159068]: 2026-02-01 15:00:35.373717172 +0000 UTC m=+0.039809348 container create 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:00:35 np0005604375 systemd[1]: Started libpod-conmon-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope.
Feb  1 10:00:35 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:00:35 np0005604375 podman[159068]: 2026-02-01 15:00:35.427545094 +0000 UTC m=+0.093637340 container init 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  1 10:00:35 np0005604375 podman[159068]: 2026-02-01 15:00:35.431997409 +0000 UTC m=+0.098089565 container start 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 10:00:35 np0005604375 vibrant_hopper[159124]: 167 167
Feb  1 10:00:35 np0005604375 systemd[1]: libpod-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope: Deactivated successfully.
Feb  1 10:00:35 np0005604375 conmon[159124]: conmon 9c2f7ffa0892e83ac07b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope/container/memory.events
Feb  1 10:00:35 np0005604375 podman[159068]: 2026-02-01 15:00:35.43525596 +0000 UTC m=+0.101348206 container attach 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 10:00:35 np0005604375 podman[159068]: 2026-02-01 15:00:35.435628881 +0000 UTC m=+0.101721077 container died 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:00:35 np0005604375 podman[159068]: 2026-02-01 15:00:35.353274128 +0000 UTC m=+0.019366364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:00:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-24922403da10291fd441336338c607e23e2319b4062d3cbc9f91338bc22f3430-merged.mount: Deactivated successfully.
Feb  1 10:00:35 np0005604375 podman[159068]: 2026-02-01 15:00:35.471188219 +0000 UTC m=+0.137280405 container remove 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:00:35 np0005604375 systemd[1]: libpod-conmon-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope: Deactivated successfully.
Feb  1 10:00:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:35 np0005604375 podman[159200]: 2026-02-01 15:00:35.599957184 +0000 UTC m=+0.039017836 container create 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb  1 10:00:35 np0005604375 systemd[1]: Started libpod-conmon-49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca.scope.
Feb  1 10:00:35 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:00:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:35 np0005604375 podman[159200]: 2026-02-01 15:00:35.681278726 +0000 UTC m=+0.120339418 container init 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:00:35 np0005604375 podman[159200]: 2026-02-01 15:00:35.586014833 +0000 UTC m=+0.025075495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:00:35 np0005604375 podman[159200]: 2026-02-01 15:00:35.686240456 +0000 UTC m=+0.125301108 container start 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:00:35 np0005604375 podman[159200]: 2026-02-01 15:00:35.689346073 +0000 UTC m=+0.128406735 container attach 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 10:00:35 np0005604375 python3.9[159273]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:35 np0005604375 amazing_cori[159240]: {
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:    "0": [
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:        {
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "devices": [
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "/dev/loop3"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            ],
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_name": "ceph_lv0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_size": "21470642176",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "name": "ceph_lv0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "tags": {
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cluster_name": "ceph",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.crush_device_class": "",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.encrypted": "0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.objectstore": "bluestore",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osd_id": "0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.type": "block",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.vdo": "0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.with_tpm": "0"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            },
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "type": "block",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "vg_name": "ceph_vg0"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:        }
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:    ],
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:    "1": [
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:        {
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "devices": [
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "/dev/loop4"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            ],
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_name": "ceph_lv1",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_size": "21470642176",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "name": "ceph_lv1",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "tags": {
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cluster_name": "ceph",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.crush_device_class": "",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.encrypted": "0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.objectstore": "bluestore",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osd_id": "1",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.type": "block",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.vdo": "0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.with_tpm": "0"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            },
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "type": "block",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "vg_name": "ceph_vg1"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:        }
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:    ],
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:    "2": [
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:        {
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "devices": [
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "/dev/loop5"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            ],
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_name": "ceph_lv2",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_size": "21470642176",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "name": "ceph_lv2",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "tags": {
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.cluster_name": "ceph",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.crush_device_class": "",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.encrypted": "0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.objectstore": "bluestore",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osd_id": "2",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.type": "block",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.vdo": "0",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:                "ceph.with_tpm": "0"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            },
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "type": "block",
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:            "vg_name": "ceph_vg2"
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:        }
Feb  1 10:00:35 np0005604375 amazing_cori[159240]:    ]
Feb  1 10:00:35 np0005604375 amazing_cori[159240]: }
Feb  1 10:00:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:35 np0005604375 systemd[1]: libpod-49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca.scope: Deactivated successfully.
Feb  1 10:00:35 np0005604375 podman[159200]: 2026-02-01 15:00:35.952046729 +0000 UTC m=+0.391107411 container died 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:00:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6-merged.mount: Deactivated successfully.
Feb  1 10:00:36 np0005604375 podman[159200]: 2026-02-01 15:00:36.00197792 +0000 UTC m=+0.441038602 container remove 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 10:00:36 np0005604375 systemd[1]: libpod-conmon-49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca.scope: Deactivated successfully.
Feb  1 10:00:36 np0005604375 podman[159463]: 2026-02-01 15:00:36.344152057 +0000 UTC m=+0.073665949 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  1 10:00:36 np0005604375 podman[159464]: 2026-02-01 15:00:36.385799816 +0000 UTC m=+0.114900706 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  1 10:00:36 np0005604375 podman[159546]: 2026-02-01 15:00:36.452150679 +0000 UTC m=+0.038308486 container create 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:00:36 np0005604375 systemd[1]: Started libpod-conmon-5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb.scope.
Feb  1 10:00:36 np0005604375 python3.9[159520]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:36 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:00:36 np0005604375 podman[159546]: 2026-02-01 15:00:36.519420338 +0000 UTC m=+0.105578145 container init 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:00:36 np0005604375 podman[159546]: 2026-02-01 15:00:36.525261222 +0000 UTC m=+0.111419039 container start 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 10:00:36 np0005604375 mystifying_blackwell[159563]: 167 167
Feb  1 10:00:36 np0005604375 systemd[1]: libpod-5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb.scope: Deactivated successfully.
Feb  1 10:00:36 np0005604375 podman[159546]: 2026-02-01 15:00:36.529741518 +0000 UTC m=+0.115899345 container attach 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:00:36 np0005604375 podman[159546]: 2026-02-01 15:00:36.530503669 +0000 UTC m=+0.116661476 container died 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 10:00:36 np0005604375 podman[159546]: 2026-02-01 15:00:36.436260693 +0000 UTC m=+0.022418520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:00:36 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f0b3dfdb02bfc409994dba55f5ab5bf0a5ac0fec53ab646260ff2c851b882fd3-merged.mount: Deactivated successfully.
Feb  1 10:00:36 np0005604375 podman[159546]: 2026-02-01 15:00:36.562180538 +0000 UTC m=+0.148338365 container remove 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 10:00:36 np0005604375 systemd[1]: libpod-conmon-5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb.scope: Deactivated successfully.
Feb  1 10:00:36 np0005604375 podman[159639]: 2026-02-01 15:00:36.700487772 +0000 UTC m=+0.050461128 container create d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 10:00:36 np0005604375 systemd[1]: Started libpod-conmon-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope.
Feb  1 10:00:36 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:00:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:00:36 np0005604375 podman[159639]: 2026-02-01 15:00:36.680974874 +0000 UTC m=+0.030948220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:00:36 np0005604375 podman[159639]: 2026-02-01 15:00:36.789693666 +0000 UTC m=+0.139667012 container init d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 10:00:36 np0005604375 podman[159639]: 2026-02-01 15:00:36.796684152 +0000 UTC m=+0.146657478 container start d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:00:36 np0005604375 podman[159639]: 2026-02-01 15:00:36.800247152 +0000 UTC m=+0.150220498 container attach d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:00:37 np0005604375 python3.9[159759]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:37 np0005604375 lvm[159984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:00:37 np0005604375 lvm[159984]: VG ceph_vg0 finished
Feb  1 10:00:37 np0005604375 lvm[159987]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:00:37 np0005604375 lvm[159987]: VG ceph_vg1 finished
Feb  1 10:00:37 np0005604375 lvm[159988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:00:37 np0005604375 lvm[159988]: VG ceph_vg0 finished
Feb  1 10:00:37 np0005604375 lvm[159990]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:00:37 np0005604375 lvm[159990]: VG ceph_vg2 finished
Feb  1 10:00:37 np0005604375 python3.9[159981]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:37 np0005604375 bold_brahmagupta[159702]: {}
Feb  1 10:00:37 np0005604375 systemd[1]: libpod-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope: Deactivated successfully.
Feb  1 10:00:37 np0005604375 systemd[1]: libpod-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope: Consumed 1.062s CPU time.
Feb  1 10:00:37 np0005604375 podman[159639]: 2026-02-01 15:00:37.620041809 +0000 UTC m=+0.970015175 container died d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:00:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4-merged.mount: Deactivated successfully.
Feb  1 10:00:37 np0005604375 podman[159639]: 2026-02-01 15:00:37.699734336 +0000 UTC m=+1.049707652 container remove d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 10:00:37 np0005604375 systemd[1]: libpod-conmon-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope: Deactivated successfully.
Feb  1 10:00:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:00:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:00:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:38 np0005604375 python3.9[160183]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:00:38 np0005604375 python3.9[160335]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:00:39 np0005604375 python3.9[160487]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  1 10:00:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:40 np0005604375 python3.9[160639]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 10:00:40 np0005604375 systemd[1]: Reloading.
Feb  1 10:00:40 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:00:40 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:00:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:41 np0005604375 python3.9[160827]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:41 np0005604375 python3.9[160980]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:42 np0005604375 python3.9[161133]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:43 np0005604375 python3.9[161286]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:44 np0005604375 python3.9[161439]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:45 np0005604375 python3.9[161592]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:46 np0005604375 python3.9[161745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:00:47 np0005604375 python3.9[161898]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb  1 10:00:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:00:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:00:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:00:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:00:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:00:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:00:49 np0005604375 python3.9[162051]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  1 10:00:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:50 np0005604375 python3.9[162209]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  1 10:00:50 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 10:00:50 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 10:00:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:51 np0005604375 python3.9[162370]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 10:00:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:52 np0005604375 python3.9[162454]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 10:00:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:00:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:00:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:01:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5615 writes, 888 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 18.67 MB, 0.03 MB/s#012Interval WAL: 5615 writes, 888 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  1 10:01:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:01:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 19.77 MB, 0.03 MB/s#012Interval WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Feb  1 10:01:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:07 np0005604375 podman[162660]: 2026-02-01 15:01:06.99968666 +0000 UTC m=+0.078213357 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:01:07 np0005604375 podman[162661]: 2026-02-01 15:01:07.035160376 +0000 UTC m=+0.113787296 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 10:01:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:01:07.792 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:01:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:01:07.793 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:01:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:01:07.793 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:01:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:01:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 18.44 MB, 0.03 MB/s#012Interval WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  1 10:01:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:11 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb  1 10:01:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:14 np0005604375 kernel: SELinux:  Converting 2777 SID table entries...
Feb  1 10:01:14 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 10:01:14 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 10:01:14 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 10:01:14 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 10:01:14 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 10:01:14 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 10:01:14 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 10:01:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:01:17
Feb  1 10:01:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:01:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:01:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Feb  1 10:01:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:01:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:01:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:01:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:23 np0005604375 kernel: SELinux:  Converting 2777 SID table entries...
Feb  1 10:01:23 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 10:01:23 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 10:01:23 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 10:01:23 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 10:01:23 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 10:01:23 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 10:01:23 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 10:01:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:01:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:01:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:37 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb  1 10:01:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:37 np0005604375 podman[166150]: 2026-02-01 15:01:37.972074067 +0000 UTC m=+0.090669304 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  1 10:01:37 np0005604375 podman[166162]: 2026-02-01 15:01:37.996100511 +0000 UTC m=+0.114908054 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:01:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:01:38 np0005604375 podman[167179]: 2026-02-01 15:01:38.924505424 +0000 UTC m=+0.093446521 container create 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 10:01:38 np0005604375 podman[167179]: 2026-02-01 15:01:38.851338002 +0000 UTC m=+0.020279129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:01:38 np0005604375 systemd[1]: Started libpod-conmon-6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212.scope.
Feb  1 10:01:39 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:01:39 np0005604375 podman[167179]: 2026-02-01 15:01:39.024456187 +0000 UTC m=+0.193397294 container init 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 10:01:39 np0005604375 podman[167179]: 2026-02-01 15:01:39.031572536 +0000 UTC m=+0.200513623 container start 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 10:01:39 np0005604375 loving_neumann[167354]: 167 167
Feb  1 10:01:39 np0005604375 systemd[1]: libpod-6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212.scope: Deactivated successfully.
Feb  1 10:01:39 np0005604375 podman[167179]: 2026-02-01 15:01:39.147531178 +0000 UTC m=+0.316472315 container attach 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 10:01:39 np0005604375 podman[167179]: 2026-02-01 15:01:39.148054422 +0000 UTC m=+0.316995529 container died 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:01:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 10:01:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:01:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:01:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:01:39 np0005604375 systemd[1]: var-lib-containers-storage-overlay-8956a1a0753ceeca025308bc2f83fabd481a167ed3e7ec205dd71bfcc8d96580-merged.mount: Deactivated successfully.
Feb  1 10:01:39 np0005604375 podman[167179]: 2026-02-01 15:01:39.345850449 +0000 UTC m=+0.514791566 container remove 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 10:01:39 np0005604375 systemd[1]: libpod-conmon-6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212.scope: Deactivated successfully.
Feb  1 10:01:39 np0005604375 podman[167845]: 2026-02-01 15:01:39.53987969 +0000 UTC m=+0.077649679 container create 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:01:39 np0005604375 systemd[1]: Started libpod-conmon-1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a.scope.
Feb  1 10:01:39 np0005604375 podman[167845]: 2026-02-01 15:01:39.52312038 +0000 UTC m=+0.060890419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:01:39 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:01:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:39 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:39 np0005604375 podman[167845]: 2026-02-01 15:01:39.632702542 +0000 UTC m=+0.170472531 container init 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb  1 10:01:39 np0005604375 podman[167845]: 2026-02-01 15:01:39.64119256 +0000 UTC m=+0.178962589 container start 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:01:39 np0005604375 podman[167845]: 2026-02-01 15:01:39.645229434 +0000 UTC m=+0.182999423 container attach 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:01:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:40 np0005604375 confident_pare[167989]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:01:40 np0005604375 confident_pare[167989]: --> All data devices are unavailable
Feb  1 10:01:40 np0005604375 systemd[1]: libpod-1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a.scope: Deactivated successfully.
Feb  1 10:01:40 np0005604375 podman[167845]: 2026-02-01 15:01:40.065817907 +0000 UTC m=+0.603587926 container died 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 10:01:40 np0005604375 systemd[1]: var-lib-containers-storage-overlay-780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7-merged.mount: Deactivated successfully.
Feb  1 10:01:40 np0005604375 podman[167845]: 2026-02-01 15:01:40.105818148 +0000 UTC m=+0.643588137 container remove 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 10:01:40 np0005604375 systemd[1]: libpod-conmon-1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a.scope: Deactivated successfully.
Feb  1 10:01:40 np0005604375 podman[168848]: 2026-02-01 15:01:40.47178271 +0000 UTC m=+0.032695388 container create 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:01:40 np0005604375 systemd[1]: Started libpod-conmon-2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf.scope.
Feb  1 10:01:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:40 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:01:40 np0005604375 podman[168848]: 2026-02-01 15:01:40.542338898 +0000 UTC m=+0.103251606 container init 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:01:40 np0005604375 podman[168848]: 2026-02-01 15:01:40.547571415 +0000 UTC m=+0.108484103 container start 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:01:40 np0005604375 jolly_lederberg[168921]: 167 167
Feb  1 10:01:40 np0005604375 systemd[1]: libpod-2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf.scope: Deactivated successfully.
Feb  1 10:01:40 np0005604375 podman[168848]: 2026-02-01 15:01:40.552133783 +0000 UTC m=+0.113046481 container attach 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 10:01:40 np0005604375 podman[168848]: 2026-02-01 15:01:40.552508414 +0000 UTC m=+0.113421092 container died 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:01:40 np0005604375 podman[168848]: 2026-02-01 15:01:40.457745536 +0000 UTC m=+0.018658234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:01:40 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e82edcebe7af1e35ef35394bf42fc8017e4c31d68b0194216a0fbfe3789bd57b-merged.mount: Deactivated successfully.
Feb  1 10:01:40 np0005604375 podman[168848]: 2026-02-01 15:01:40.578581485 +0000 UTC m=+0.139494163 container remove 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 10:01:40 np0005604375 systemd[1]: libpod-conmon-2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf.scope: Deactivated successfully.
Feb  1 10:01:40 np0005604375 podman[169116]: 2026-02-01 15:01:40.703648482 +0000 UTC m=+0.031059192 container create 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 10:01:40 np0005604375 systemd[1]: Started libpod-conmon-2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b.scope.
Feb  1 10:01:40 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:01:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:40 np0005604375 podman[169116]: 2026-02-01 15:01:40.688920699 +0000 UTC m=+0.016331419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:01:40 np0005604375 podman[169116]: 2026-02-01 15:01:40.78954149 +0000 UTC m=+0.116952280 container init 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  1 10:01:40 np0005604375 podman[169116]: 2026-02-01 15:01:40.797874584 +0000 UTC m=+0.125285294 container start 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:01:40 np0005604375 podman[169116]: 2026-02-01 15:01:40.80274453 +0000 UTC m=+0.130155340 container attach 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]: {
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:    "0": [
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:        {
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "devices": [
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "/dev/loop3"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            ],
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_name": "ceph_lv0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_size": "21470642176",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "name": "ceph_lv0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "tags": {
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cluster_name": "ceph",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.crush_device_class": "",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.encrypted": "0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.objectstore": "bluestore",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osd_id": "0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.type": "block",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.vdo": "0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.with_tpm": "0"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            },
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "type": "block",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "vg_name": "ceph_vg0"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:        }
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:    ],
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:    "1": [
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:        {
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "devices": [
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "/dev/loop4"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            ],
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_name": "ceph_lv1",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_size": "21470642176",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "name": "ceph_lv1",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "tags": {
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cluster_name": "ceph",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.crush_device_class": "",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.encrypted": "0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.objectstore": "bluestore",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osd_id": "1",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.type": "block",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.vdo": "0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.with_tpm": "0"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            },
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "type": "block",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "vg_name": "ceph_vg1"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:        }
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:    ],
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:    "2": [
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:        {
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "devices": [
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "/dev/loop5"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            ],
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_name": "ceph_lv2",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_size": "21470642176",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "name": "ceph_lv2",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "tags": {
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.cluster_name": "ceph",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.crush_device_class": "",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.encrypted": "0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.objectstore": "bluestore",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osd_id": "2",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.type": "block",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.vdo": "0",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:                "ceph.with_tpm": "0"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            },
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "type": "block",
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:            "vg_name": "ceph_vg2"
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:        }
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]:    ]
Feb  1 10:01:41 np0005604375 pedantic_panini[169212]: }
Feb  1 10:01:41 np0005604375 systemd[1]: libpod-2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b.scope: Deactivated successfully.
Feb  1 10:01:41 np0005604375 podman[169116]: 2026-02-01 15:01:41.089665066 +0000 UTC m=+0.417075806 container died 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:01:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99-merged.mount: Deactivated successfully.
Feb  1 10:01:41 np0005604375 podman[169116]: 2026-02-01 15:01:41.134716759 +0000 UTC m=+0.462127469 container remove 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 10:01:41 np0005604375 systemd[1]: libpod-conmon-2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b.scope: Deactivated successfully.
Feb  1 10:01:41 np0005604375 podman[169960]: 2026-02-01 15:01:41.527944966 +0000 UTC m=+0.034736555 container create 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 10:01:41 np0005604375 systemd[1]: Started libpod-conmon-3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890.scope.
Feb  1 10:01:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:01:41 np0005604375 podman[169960]: 2026-02-01 15:01:41.591768515 +0000 UTC m=+0.098560124 container init 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:01:41 np0005604375 podman[169960]: 2026-02-01 15:01:41.595179501 +0000 UTC m=+0.101971110 container start 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 10:01:41 np0005604375 podman[169960]: 2026-02-01 15:01:41.598267108 +0000 UTC m=+0.105058747 container attach 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:01:41 np0005604375 eloquent_khorana[170057]: 167 167
Feb  1 10:01:41 np0005604375 systemd[1]: libpod-3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890.scope: Deactivated successfully.
Feb  1 10:01:41 np0005604375 podman[169960]: 2026-02-01 15:01:41.599683317 +0000 UTC m=+0.106474916 container died 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:01:41 np0005604375 podman[169960]: 2026-02-01 15:01:41.512632736 +0000 UTC m=+0.019424355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:01:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c1ce00a07f0d0f5bb6ad3ac6ef0cd8d84dd392be34690e6ce535dde83ffd93c3-merged.mount: Deactivated successfully.
Feb  1 10:01:41 np0005604375 podman[169960]: 2026-02-01 15:01:41.629094252 +0000 UTC m=+0.135885861 container remove 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:01:41 np0005604375 systemd[1]: libpod-conmon-3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890.scope: Deactivated successfully.
Feb  1 10:01:41 np0005604375 podman[170233]: 2026-02-01 15:01:41.759096327 +0000 UTC m=+0.044866429 container create eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 10:01:41 np0005604375 systemd[1]: Started libpod-conmon-eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565.scope.
Feb  1 10:01:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:01:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:01:41 np0005604375 podman[170233]: 2026-02-01 15:01:41.737107081 +0000 UTC m=+0.022877203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:01:41 np0005604375 podman[170233]: 2026-02-01 15:01:41.841444137 +0000 UTC m=+0.127214289 container init eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:01:41 np0005604375 podman[170233]: 2026-02-01 15:01:41.847810735 +0000 UTC m=+0.133580827 container start eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 10:01:41 np0005604375 podman[170233]: 2026-02-01 15:01:41.850758328 +0000 UTC m=+0.136528530 container attach eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:01:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:42 np0005604375 lvm[170973]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:01:42 np0005604375 lvm[170971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:01:42 np0005604375 lvm[170971]: VG ceph_vg0 finished
Feb  1 10:01:42 np0005604375 lvm[170973]: VG ceph_vg1 finished
Feb  1 10:01:42 np0005604375 lvm[170984]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:01:42 np0005604375 lvm[170984]: VG ceph_vg2 finished
Feb  1 10:01:42 np0005604375 practical_jemison[170327]: {}
Feb  1 10:01:42 np0005604375 systemd[1]: libpod-eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565.scope: Deactivated successfully.
Feb  1 10:01:42 np0005604375 podman[170233]: 2026-02-01 15:01:42.531193858 +0000 UTC m=+0.816963950 container died eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 10:01:42 np0005604375 systemd[1]: var-lib-containers-storage-overlay-bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695-merged.mount: Deactivated successfully.
Feb  1 10:01:42 np0005604375 podman[170233]: 2026-02-01 15:01:42.56873744 +0000 UTC m=+0.854507522 container remove eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 10:01:42 np0005604375 systemd[1]: libpod-conmon-eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565.scope: Deactivated successfully.
Feb  1 10:01:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:01:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:01:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:01:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:01:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:01:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:01:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:01:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:01:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:01:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:01:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:01:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:01:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:51 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:53 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:01:55 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:57 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:01:59 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:01 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:02:03 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:02:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:05 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:02:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:02:07.794 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:02:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:02:07.795 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:02:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:02:07.795 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:02:07 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:02:09 np0005604375 podman[180249]: 2026-02-01 15:02:09.04848079 +0000 UTC m=+0.127527387 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb  1 10:02:09 np0005604375 podman[180250]: 2026-02-01 15:02:09.083671447 +0000 UTC m=+0.166841480 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb  1 10:02:09 np0005604375 kernel: SELinux:  Converting 2778 SID table entries...
Feb  1 10:02:09 np0005604375 kernel: SELinux:  policy capability network_peer_controls=1
Feb  1 10:02:09 np0005604375 kernel: SELinux:  policy capability open_perms=1
Feb  1 10:02:09 np0005604375 kernel: SELinux:  policy capability extended_socket_class=1
Feb  1 10:02:09 np0005604375 kernel: SELinux:  policy capability always_check_network=0
Feb  1 10:02:09 np0005604375 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  1 10:02:09 np0005604375 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  1 10:02:09 np0005604375 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  1 10:02:09 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:02:10 np0005604375 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb  1 10:02:10 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb  1 10:02:10 np0005604375 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb  1 10:02:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:11 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:02:13 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:15 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:17 np0005604375 systemd[1]: Stopping OpenSSH server daemon...
Feb  1 10:02:17 np0005604375 systemd[1]: sshd.service: Deactivated successfully.
Feb  1 10:02:17 np0005604375 systemd[1]: Stopped OpenSSH server daemon.
Feb  1 10:02:17 np0005604375 systemd[1]: sshd.service: Consumed 2.298s CPU time, read 32.0K from disk, written 16.0K to disk.
Feb  1 10:02:17 np0005604375 systemd[1]: Stopped target sshd-keygen.target.
Feb  1 10:02:17 np0005604375 systemd[1]: Stopping sshd-keygen.target...
Feb  1 10:02:17 np0005604375 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  1 10:02:17 np0005604375 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  1 10:02:17 np0005604375 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  1 10:02:17 np0005604375 systemd[1]: Reached target sshd-keygen.target.
Feb  1 10:02:17 np0005604375 systemd[1]: Starting OpenSSH server daemon...
Feb  1 10:02:17 np0005604375 systemd[1]: Started OpenSSH server daemon.
Feb  1 10:02:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:02:17
Feb  1 10:02:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:02:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:02:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Feb  1 10:02:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:02:17 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:02:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:02:19 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 10:02:19 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 10:02:19 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:19 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:19 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:19 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 10:02:19 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:21 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:23 np0005604375 python3.9[187552]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 10:02:23 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:23 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:23 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:23 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:24 np0005604375 python3.9[189157]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 10:02:24 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:24 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:24 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:25 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 10:02:25 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 10:02:25 np0005604375 systemd[1]: man-db-cache-update.service: Consumed 7.549s CPU time.
Feb  1 10:02:25 np0005604375 systemd[1]: run-r25c73ef25da04b0aa43bef90637def35.service: Deactivated successfully.
Feb  1 10:02:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:25 np0005604375 python3.9[190328]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 10:02:25 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:25 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:25 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:25 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:26 np0005604375 python3.9[190517]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 10:02:26 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:26 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:26 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:27 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:02:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:02:28 np0005604375 python3.9[190706]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:28 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:28 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:28 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:29 np0005604375 python3.9[190898]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:29 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:29 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:29 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:29 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:30 np0005604375 python3.9[191088]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:30 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:30 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:30 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:31 np0005604375 python3.9[191278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:31 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:32 np0005604375 python3.9[191433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:32 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:32 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:32 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:33 np0005604375 python3.9[191622]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  1 10:02:33 np0005604375 systemd[1]: Reloading.
Feb  1 10:02:33 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:02:33 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:02:33 np0005604375 systemd[1]: Listening on libvirt proxy daemon socket.
Feb  1 10:02:33 np0005604375 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb  1 10:02:33 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:34 np0005604375 python3.9[191815]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:35 np0005604375 python3.9[191970]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:35 np0005604375 python3.9[192125]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:35 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:36 np0005604375 python3.9[192280]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:37 np0005604375 python3.9[192435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:37 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:38 np0005604375 python3.9[192590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:39 np0005604375 python3.9[192745]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:39 np0005604375 podman[192747]: 2026-02-01 15:02:39.206680978 +0000 UTC m=+0.085949699 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb  1 10:02:39 np0005604375 podman[192748]: 2026-02-01 15:02:39.209179048 +0000 UTC m=+0.089437407 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb  1 10:02:39 np0005604375 python3.9[192944]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:39 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:40 np0005604375 python3.9[193099]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:41 np0005604375 python3.9[193254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:41 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:42 np0005604375 python3.9[193409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:43 np0005604375 python3.9[193614]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:43 np0005604375 podman[193661]: 2026-02-01 15:02:43.151062294 +0000 UTC m=+0.055468975 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 10:02:43 np0005604375 podman[193661]: 2026-02-01 15:02:43.271636913 +0000 UTC m=+0.176043594 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:02:43 np0005604375 python3.9[193928]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:02:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:02:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:43 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:02:44 np0005604375 python3.9[194219]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  1 10:02:44 np0005604375 podman[194327]: 2026-02-01 15:02:44.85480569 +0000 UTC m=+0.050201387 container create 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:02:44 np0005604375 systemd[1]: Started libpod-conmon-411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf.scope.
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:02:44 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:02:44 np0005604375 podman[194327]: 2026-02-01 15:02:44.83622591 +0000 UTC m=+0.031621627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:02:44 np0005604375 podman[194327]: 2026-02-01 15:02:44.941736876 +0000 UTC m=+0.137132593 container init 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  1 10:02:44 np0005604375 podman[194327]: 2026-02-01 15:02:44.947255851 +0000 UTC m=+0.142651558 container start 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 10:02:44 np0005604375 podman[194327]: 2026-02-01 15:02:44.951388386 +0000 UTC m=+0.146784103 container attach 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:02:44 np0005604375 zealous_tharp[194343]: 167 167
Feb  1 10:02:44 np0005604375 systemd[1]: libpod-411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf.scope: Deactivated successfully.
Feb  1 10:02:44 np0005604375 podman[194327]: 2026-02-01 15:02:44.953162516 +0000 UTC m=+0.148558223 container died 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:02:44 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b9d484e6f7448fb5b86f659944dd18358f1ff4e04f2277f55474400e65601736-merged.mount: Deactivated successfully.
Feb  1 10:02:45 np0005604375 podman[194327]: 2026-02-01 15:02:45.012738955 +0000 UTC m=+0.208134692 container remove 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:02:45 np0005604375 systemd[1]: libpod-conmon-411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf.scope: Deactivated successfully.
Feb  1 10:02:45 np0005604375 podman[194408]: 2026-02-01 15:02:45.176093622 +0000 UTC m=+0.062093801 container create 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 10:02:45 np0005604375 systemd[1]: Started libpod-conmon-70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf.scope.
Feb  1 10:02:45 np0005604375 podman[194408]: 2026-02-01 15:02:45.148986543 +0000 UTC m=+0.034986812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:02:45 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:02:45 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:45 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:45 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:45 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:45 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:45 np0005604375 podman[194408]: 2026-02-01 15:02:45.295330983 +0000 UTC m=+0.181331232 container init 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:02:45 np0005604375 podman[194408]: 2026-02-01 15:02:45.310957971 +0000 UTC m=+0.196958150 container start 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:02:45 np0005604375 podman[194408]: 2026-02-01 15:02:45.315521619 +0000 UTC m=+0.201521878 container attach 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 10:02:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:45 np0005604375 python3.9[194515]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:02:45 np0005604375 reverent_buck[194458]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:02:45 np0005604375 reverent_buck[194458]: --> All data devices are unavailable
Feb  1 10:02:45 np0005604375 systemd[1]: libpod-70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf.scope: Deactivated successfully.
Feb  1 10:02:45 np0005604375 podman[194408]: 2026-02-01 15:02:45.792893424 +0000 UTC m=+0.678893603 container died 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:02:45 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a-merged.mount: Deactivated successfully.
Feb  1 10:02:45 np0005604375 podman[194408]: 2026-02-01 15:02:45.832504424 +0000 UTC m=+0.718504613 container remove 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:02:45 np0005604375 systemd[1]: libpod-conmon-70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf.scope: Deactivated successfully.
Feb  1 10:02:45 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:46 np0005604375 python3.9[194719]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:02:46 np0005604375 podman[194763]: 2026-02-01 15:02:46.190609068 +0000 UTC m=+0.031862124 container create c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 10:02:46 np0005604375 systemd[1]: Started libpod-conmon-c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5.scope.
Feb  1 10:02:46 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:02:46 np0005604375 podman[194763]: 2026-02-01 15:02:46.254257321 +0000 UTC m=+0.095510407 container init c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb  1 10:02:46 np0005604375 podman[194763]: 2026-02-01 15:02:46.259090266 +0000 UTC m=+0.100343312 container start c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 10:02:46 np0005604375 awesome_nobel[194806]: 167 167
Feb  1 10:02:46 np0005604375 systemd[1]: libpod-c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5.scope: Deactivated successfully.
Feb  1 10:02:46 np0005604375 podman[194763]: 2026-02-01 15:02:46.26528393 +0000 UTC m=+0.106536996 container attach c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 10:02:46 np0005604375 podman[194763]: 2026-02-01 15:02:46.265599219 +0000 UTC m=+0.106852275 container died c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:02:46 np0005604375 podman[194763]: 2026-02-01 15:02:46.176021489 +0000 UTC m=+0.017274565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:02:46 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0eb434392060e30875fb31050998934718d79084d3fbba4bd767d8b942f678f8-merged.mount: Deactivated successfully.
Feb  1 10:02:46 np0005604375 podman[194763]: 2026-02-01 15:02:46.326534206 +0000 UTC m=+0.167787262 container remove c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:02:46 np0005604375 systemd[1]: libpod-conmon-c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5.scope: Deactivated successfully.
Feb  1 10:02:46 np0005604375 podman[194903]: 2026-02-01 15:02:46.434176172 +0000 UTC m=+0.034512518 container create c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 10:02:46 np0005604375 systemd[1]: Started libpod-conmon-c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594.scope.
Feb  1 10:02:46 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:02:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:46 np0005604375 podman[194903]: 2026-02-01 15:02:46.509880813 +0000 UTC m=+0.110217249 container init c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:02:46 np0005604375 podman[194903]: 2026-02-01 15:02:46.417916437 +0000 UTC m=+0.018252803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:02:46 np0005604375 podman[194903]: 2026-02-01 15:02:46.514437941 +0000 UTC m=+0.114774287 container start c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 10:02:46 np0005604375 podman[194903]: 2026-02-01 15:02:46.521725015 +0000 UTC m=+0.122061371 container attach c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 10:02:46 np0005604375 python3.9[194969]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]: {
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:    "0": [
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:        {
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "devices": [
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "/dev/loop3"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            ],
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_name": "ceph_lv0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_size": "21470642176",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "name": "ceph_lv0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "tags": {
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cluster_name": "ceph",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.crush_device_class": "",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.encrypted": "0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.objectstore": "bluestore",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osd_id": "0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.type": "block",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.vdo": "0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.with_tpm": "0"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            },
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "type": "block",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "vg_name": "ceph_vg0"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:        }
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:    ],
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:    "1": [
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:        {
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "devices": [
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "/dev/loop4"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            ],
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_name": "ceph_lv1",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_size": "21470642176",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "name": "ceph_lv1",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "tags": {
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cluster_name": "ceph",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.crush_device_class": "",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.encrypted": "0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.objectstore": "bluestore",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osd_id": "1",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.type": "block",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.vdo": "0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.with_tpm": "0"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            },
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "type": "block",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "vg_name": "ceph_vg1"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:        }
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:    ],
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:    "2": [
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:        {
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "devices": [
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "/dev/loop5"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            ],
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_name": "ceph_lv2",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_size": "21470642176",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "name": "ceph_lv2",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "tags": {
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.cluster_name": "ceph",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.crush_device_class": "",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.encrypted": "0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.objectstore": "bluestore",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osd_id": "2",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.type": "block",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.vdo": "0",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:                "ceph.with_tpm": "0"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            },
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "type": "block",
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:            "vg_name": "ceph_vg2"
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:        }
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]:    ]
Feb  1 10:02:46 np0005604375 goofy_sutherland[194966]: }
Feb  1 10:02:46 np0005604375 systemd[1]: libpod-c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594.scope: Deactivated successfully.
Feb  1 10:02:46 np0005604375 podman[194903]: 2026-02-01 15:02:46.820873897 +0000 UTC m=+0.421210273 container died c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:02:46 np0005604375 systemd[1]: var-lib-containers-storage-overlay-38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a-merged.mount: Deactivated successfully.
Feb  1 10:02:46 np0005604375 podman[194903]: 2026-02-01 15:02:46.858049979 +0000 UTC m=+0.458386335 container remove c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:02:46 np0005604375 systemd[1]: libpod-conmon-c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594.scope: Deactivated successfully.
Feb  1 10:02:47 np0005604375 podman[195203]: 2026-02-01 15:02:47.236748499 +0000 UTC m=+0.032388588 container create 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb  1 10:02:47 np0005604375 python3.9[195188]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:02:47 np0005604375 systemd[1]: Started libpod-conmon-34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9.scope.
Feb  1 10:02:47 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:02:47 np0005604375 podman[195203]: 2026-02-01 15:02:47.309141368 +0000 UTC m=+0.104781467 container init 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:02:47 np0005604375 podman[195203]: 2026-02-01 15:02:47.315680871 +0000 UTC m=+0.111320940 container start 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  1 10:02:47 np0005604375 nostalgic_colden[195220]: 167 167
Feb  1 10:02:47 np0005604375 podman[195203]: 2026-02-01 15:02:47.223866358 +0000 UTC m=+0.019506437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:02:47 np0005604375 systemd[1]: libpod-34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9.scope: Deactivated successfully.
Feb  1 10:02:47 np0005604375 podman[195203]: 2026-02-01 15:02:47.320413053 +0000 UTC m=+0.116053142 container attach 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 10:02:47 np0005604375 podman[195203]: 2026-02-01 15:02:47.321036491 +0000 UTC m=+0.116676590 container died 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:02:47 np0005604375 systemd[1]: var-lib-containers-storage-overlay-568e48b9b41534189804aea9e9676e53116caa5a5e52fcf45bb1ac28ce869dcd-merged.mount: Deactivated successfully.
Feb  1 10:02:47 np0005604375 podman[195203]: 2026-02-01 15:02:47.359612562 +0000 UTC m=+0.155252661 container remove 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:02:47 np0005604375 systemd[1]: libpod-conmon-34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9.scope: Deactivated successfully.
Feb  1 10:02:47 np0005604375 podman[195300]: 2026-02-01 15:02:47.51766379 +0000 UTC m=+0.068015617 container create e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 10:02:47 np0005604375 systemd[1]: Started libpod-conmon-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope.
Feb  1 10:02:47 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:02:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:47 np0005604375 podman[195300]: 2026-02-01 15:02:47.495598292 +0000 UTC m=+0.045950159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:02:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:02:47 np0005604375 podman[195300]: 2026-02-01 15:02:47.615582054 +0000 UTC m=+0.165933921 container init e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 10:02:47 np0005604375 podman[195300]: 2026-02-01 15:02:47.621897571 +0000 UTC m=+0.172249428 container start e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 10:02:47 np0005604375 podman[195300]: 2026-02-01 15:02:47.627394465 +0000 UTC m=+0.177746332 container attach e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:02:47 np0005604375 python3.9[195420]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:02:47 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:02:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:02:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:02:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:02:48 np0005604375 lvm[195615]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:02:48 np0005604375 lvm[195615]: VG ceph_vg0 finished
Feb  1 10:02:48 np0005604375 lvm[195619]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:02:48 np0005604375 lvm[195619]: VG ceph_vg1 finished
Feb  1 10:02:48 np0005604375 lvm[195647]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:02:48 np0005604375 lvm[195647]: VG ceph_vg2 finished
Feb  1 10:02:48 np0005604375 friendly_dubinsky[195369]: {}
Feb  1 10:02:48 np0005604375 systemd[1]: libpod-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope: Deactivated successfully.
Feb  1 10:02:48 np0005604375 systemd[1]: libpod-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope: Consumed 1.685s CPU time.
Feb  1 10:02:48 np0005604375 podman[195300]: 2026-02-01 15:02:48.695365557 +0000 UTC m=+1.245717404 container died e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:02:48 np0005604375 python3.9[195649]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:02:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:02:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:02:48 np0005604375 systemd[1]: var-lib-containers-storage-overlay-ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c-merged.mount: Deactivated successfully.
Feb  1 10:02:48 np0005604375 podman[195300]: 2026-02-01 15:02:48.75152573 +0000 UTC m=+1.301877547 container remove e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:02:48 np0005604375 systemd[1]: libpod-conmon-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope: Deactivated successfully.
Feb  1 10:02:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:02:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:02:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:02:49 np0005604375 python3.9[195838]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 10:02:49 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:50 np0005604375 python3.9[195990]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:51 np0005604375 python3.9[196115]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958169.6471763-557-93934907013417/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:51 np0005604375 python3.9[196267]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:52 np0005604375 python3.9[196392]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958171.2153761-557-263758877874439/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:52 np0005604375 python3.9[196544]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:53 np0005604375 python3.9[196669]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958172.352008-557-181641515215162/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:53 np0005604375 python3.9[196821]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:54 np0005604375 python3.9[196946]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958173.412101-557-190225944031075/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:55 np0005604375 python3.9[197098]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:02:55 np0005604375 python3.9[197223]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958174.6263795-557-21609612603065/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:56 np0005604375 python3.9[197375]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:56 np0005604375 python3.9[197500]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958175.7613988-557-189555856095843/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:57 np0005604375 python3.9[197652]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:02:58 np0005604375 python3.9[197775]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958177.0121386-557-239188825204576/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:58 np0005604375 python3.9[197927]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:02:59 np0005604375 python3.9[198052]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958178.165389-557-192249679704920/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:02:59 np0005604375 python3.9[198204]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb  1 10:03:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:00 np0005604375 python3.9[198357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:01 np0005604375 python3.9[198509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:01 np0005604375 python3.9[198661]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:02 np0005604375 python3.9[198813]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:03 np0005604375 python3.9[198965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:03 np0005604375 python3.9[199117]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:04 np0005604375 python3.9[199269]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:04 np0005604375 python3.9[199421]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:05 np0005604375 python3.9[199573]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:06 np0005604375 python3.9[199725]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:06 np0005604375 python3.9[199877]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:07 np0005604375 python3.9[200029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:03:07.794 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:03:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:03:07.796 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:03:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:03:07.796 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:03:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:08 np0005604375 python3.9[200181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:08 np0005604375 python3.9[200333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:09 np0005604375 python3.9[200485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:09 np0005604375 podman[200580]: 2026-02-01 15:03:09.774164114 +0000 UTC m=+0.089736166 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Feb  1 10:03:09 np0005604375 podman[200581]: 2026-02-01 15:03:09.802014634 +0000 UTC m=+0.117249866 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Feb  1 10:03:09 np0005604375 python3.9[200643]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958188.8937201-778-10792988600172/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:10 np0005604375 python3.9[200804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:11 np0005604375 python3.9[200927]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958190.082023-778-42701562262123/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:11 np0005604375 python3.9[201079]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:12 np0005604375 python3.9[201202]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958191.2974386-778-154936042616494/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:12 np0005604375 python3.9[201354]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:13 np0005604375 python3.9[201477]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958192.4440532-778-177647756558666/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:13 np0005604375 python3.9[201629]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:14 np0005604375 python3.9[201752]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958193.576037-778-180287320938589/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:15 np0005604375 python3.9[201904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:15 np0005604375 python3.9[202027]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958194.9049804-778-128318470169057/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:16 np0005604375 python3.9[202179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:17 np0005604375 python3.9[202302]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958196.0187542-778-23688930473764/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:17 np0005604375 python3.9[202454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:03:17
Feb  1 10:03:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:03:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:03:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'volumes']
Feb  1 10:03:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:18 np0005604375 python3.9[202577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958197.192349-778-230536124195852/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:03:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:03:18 np0005604375 python3.9[202729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:19 np0005604375 python3.9[202852]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958198.3556695-778-38913876432163/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:20 np0005604375 python3.9[203004]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:20 np0005604375 python3.9[203127]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958199.593427-778-128987776802921/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:21 np0005604375 python3.9[203279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:21 np0005604375 python3.9[203402]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958200.6957617-778-67547785508081/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:22 np0005604375 python3.9[203554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:22 np0005604375 python3.9[203677]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958201.8375354-778-94216886314342/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:23 np0005604375 python3.9[203829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:24 np0005604375 python3.9[203952]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958202.9457405-778-189419772073890/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:24 np0005604375 python3.9[204104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:25 np0005604375 python3.9[204227]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958204.1654148-778-91626845226628/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:25 np0005604375 python3.9[204377]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:03:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:26 np0005604375 python3.9[204532]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:03:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:03:28 np0005604375 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb  1 10:03:28 np0005604375 python3.9[204688]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:29 np0005604375 python3.9[204840]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:29 np0005604375 python3.9[204992]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:30 np0005604375 python3.9[205144]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:30 np0005604375 auditd[701]: Audit daemon rotating log files
Feb  1 10:03:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:30 np0005604375 python3.9[205296]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:31 np0005604375 python3.9[205448]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:32 np0005604375 python3.9[205600]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:32 np0005604375 python3.9[205752]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:33 np0005604375 python3.9[205904]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:34 np0005604375 python3.9[206056]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:34 np0005604375 python3.9[206208]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:03:34 np0005604375 systemd[1]: Reloading.
Feb  1 10:03:35 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:03:35 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:03:35 np0005604375 systemd[1]: Starting libvirt logging daemon socket...
Feb  1 10:03:35 np0005604375 systemd[1]: Listening on libvirt logging daemon socket.
Feb  1 10:03:35 np0005604375 systemd[1]: Starting libvirt logging daemon admin socket...
Feb  1 10:03:35 np0005604375 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb  1 10:03:35 np0005604375 systemd[1]: Starting libvirt logging daemon...
Feb  1 10:03:35 np0005604375 systemd[1]: Started libvirt logging daemon.
Feb  1 10:03:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:36 np0005604375 python3.9[206402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:03:36 np0005604375 systemd[1]: Reloading.
Feb  1 10:03:36 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:03:36 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:03:36 np0005604375 systemd[1]: Starting libvirt nodedev daemon socket...
Feb  1 10:03:36 np0005604375 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb  1 10:03:36 np0005604375 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb  1 10:03:36 np0005604375 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb  1 10:03:36 np0005604375 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb  1 10:03:36 np0005604375 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb  1 10:03:36 np0005604375 systemd[1]: Starting libvirt nodedev daemon...
Feb  1 10:03:36 np0005604375 systemd[1]: Started libvirt nodedev daemon.
Feb  1 10:03:37 np0005604375 python3.9[206618]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:03:37 np0005604375 systemd[1]: Reloading.
Feb  1 10:03:37 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:03:37 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:03:37 np0005604375 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb  1 10:03:37 np0005604375 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb  1 10:03:37 np0005604375 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb  1 10:03:37 np0005604375 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb  1 10:03:37 np0005604375 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb  1 10:03:37 np0005604375 systemd[1]: Starting libvirt proxy daemon...
Feb  1 10:03:37 np0005604375 systemd[1]: Started libvirt proxy daemon.
Feb  1 10:03:37 np0005604375 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb  1 10:03:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:38 np0005604375 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb  1 10:03:38 np0005604375 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb  1 10:03:38 np0005604375 python3.9[206836]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:03:38 np0005604375 systemd[1]: Reloading.
Feb  1 10:03:38 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:03:38 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:03:38 np0005604375 systemd[1]: Listening on libvirt locking daemon socket.
Feb  1 10:03:38 np0005604375 systemd[1]: Starting libvirt QEMU daemon socket...
Feb  1 10:03:38 np0005604375 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  1 10:03:38 np0005604375 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb  1 10:03:38 np0005604375 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb  1 10:03:38 np0005604375 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb  1 10:03:38 np0005604375 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb  1 10:03:38 np0005604375 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb  1 10:03:38 np0005604375 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb  1 10:03:38 np0005604375 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb  1 10:03:38 np0005604375 systemd[1]: Starting libvirt QEMU daemon...
Feb  1 10:03:38 np0005604375 systemd[1]: Started libvirt QEMU daemon.
Feb  1 10:03:39 np0005604375 setroubleshoot[206654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 18fce6ee-04e5-42cf-97df-eb8e56d9670c
Feb  1 10:03:39 np0005604375 setroubleshoot[206654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb  1 10:03:39 np0005604375 setroubleshoot[206654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 18fce6ee-04e5-42cf-97df-eb8e56d9670c
Feb  1 10:03:39 np0005604375 setroubleshoot[206654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb  1 10:03:39 np0005604375 python3.9[207053]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:03:39 np0005604375 systemd[1]: Reloading.
Feb  1 10:03:39 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:03:39 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:03:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:40 np0005604375 systemd[1]: Starting libvirt secret daemon socket...
Feb  1 10:03:40 np0005604375 systemd[1]: Listening on libvirt secret daemon socket.
Feb  1 10:03:40 np0005604375 systemd[1]: Starting libvirt secret daemon admin socket...
Feb  1 10:03:40 np0005604375 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb  1 10:03:40 np0005604375 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb  1 10:03:40 np0005604375 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb  1 10:03:40 np0005604375 systemd[1]: Starting libvirt secret daemon...
Feb  1 10:03:40 np0005604375 systemd[1]: Started libvirt secret daemon.
Feb  1 10:03:40 np0005604375 podman[207090]: 2026-02-01 15:03:40.154213678 +0000 UTC m=+0.129714299 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Feb  1 10:03:40 np0005604375 podman[207091]: 2026-02-01 15:03:40.159902188 +0000 UTC m=+0.137732575 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 10:03:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:40 np0005604375 python3.9[207307]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:41 np0005604375 python3.9[207459]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  1 10:03:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:42 np0005604375 python3.9[207611]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:03:42 np0005604375 python3.9[207765]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  1 10:03:43 np0005604375 python3.9[207915]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:44 np0005604375 python3.9[208036]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958223.1088572-1136-164663582789640/.source.xml follow=False _original_basename=secret.xml.j2 checksum=0167405d65199c76e23e57ae481d8cd31475ef34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:44 np0005604375 python3.9[208188]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:03:45 np0005604375 python3.9[208350]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:47 np0005604375 python3.9[208813]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:03:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:03:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:03:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:03:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:03:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:03:49 np0005604375 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb  1 10:03:49 np0005604375 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.054s CPU time.
Feb  1 10:03:49 np0005604375 python3.9[209015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:49 np0005604375 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:03:49 np0005604375 python3.9[209219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958228.1597445-1191-203415897450854/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:49 np0005604375 podman[209232]: 2026-02-01 15:03:49.775443383 +0000 UTC m=+0.045056998 container create b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:03:49 np0005604375 systemd[1]: Started libpod-conmon-b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326.scope.
Feb  1 10:03:49 np0005604375 podman[209232]: 2026-02-01 15:03:49.750722927 +0000 UTC m=+0.020336622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:03:49 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:03:49 np0005604375 podman[209232]: 2026-02-01 15:03:49.8706165 +0000 UTC m=+0.140230165 container init b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:03:49 np0005604375 podman[209232]: 2026-02-01 15:03:49.876119694 +0000 UTC m=+0.145733309 container start b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 10:03:49 np0005604375 podman[209232]: 2026-02-01 15:03:49.879574482 +0000 UTC m=+0.149188117 container attach b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:03:49 np0005604375 loving_euler[209273]: 167 167
Feb  1 10:03:49 np0005604375 systemd[1]: libpod-b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326.scope: Deactivated successfully.
Feb  1 10:03:49 np0005604375 podman[209232]: 2026-02-01 15:03:49.882775812 +0000 UTC m=+0.152389437 container died b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:03:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:03:49 np0005604375 systemd[1]: var-lib-containers-storage-overlay-458779b26d187319493e734260cbc71939ba74bc245236c516c69197eba97c84-merged.mount: Deactivated successfully.
Feb  1 10:03:49 np0005604375 podman[209232]: 2026-02-01 15:03:49.921976584 +0000 UTC m=+0.191590209 container remove b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 10:03:49 np0005604375 systemd[1]: libpod-conmon-b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326.scope: Deactivated successfully.
Feb  1 10:03:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:50 np0005604375 podman[209320]: 2026-02-01 15:03:50.08891841 +0000 UTC m=+0.053346982 container create ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:03:50 np0005604375 podman[209320]: 2026-02-01 15:03:50.067455336 +0000 UTC m=+0.031883888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:03:50 np0005604375 systemd[1]: Started libpod-conmon-ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b.scope.
Feb  1 10:03:50 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:03:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:50 np0005604375 podman[209320]: 2026-02-01 15:03:50.240518933 +0000 UTC m=+0.204947495 container init ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:03:50 np0005604375 podman[209320]: 2026-02-01 15:03:50.248746165 +0000 UTC m=+0.213174737 container start ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 10:03:50 np0005604375 podman[209320]: 2026-02-01 15:03:50.252705246 +0000 UTC m=+0.217133898 container attach ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:03:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:50 np0005604375 python3.9[209450]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:50 np0005604375 pedantic_diffie[209374]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:03:50 np0005604375 pedantic_diffie[209374]: --> All data devices are unavailable
Feb  1 10:03:50 np0005604375 systemd[1]: libpod-ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b.scope: Deactivated successfully.
Feb  1 10:03:50 np0005604375 podman[209320]: 2026-02-01 15:03:50.763888274 +0000 UTC m=+0.728316816 container died ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb  1 10:03:50 np0005604375 systemd[1]: var-lib-containers-storage-overlay-903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba-merged.mount: Deactivated successfully.
Feb  1 10:03:50 np0005604375 podman[209320]: 2026-02-01 15:03:50.812263254 +0000 UTC m=+0.776691826 container remove ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 10:03:50 np0005604375 systemd[1]: libpod-conmon-ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b.scope: Deactivated successfully.
Feb  1 10:03:51 np0005604375 podman[209690]: 2026-02-01 15:03:51.349776622 +0000 UTC m=+0.064590417 container create 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:03:51 np0005604375 systemd[1]: Started libpod-conmon-86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2.scope.
Feb  1 10:03:51 np0005604375 podman[209690]: 2026-02-01 15:03:51.319877202 +0000 UTC m=+0.034690997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:03:51 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:03:51 np0005604375 podman[209690]: 2026-02-01 15:03:51.446667708 +0000 UTC m=+0.161481563 container init 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 10:03:51 np0005604375 python3.9[209688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:51 np0005604375 podman[209690]: 2026-02-01 15:03:51.45743034 +0000 UTC m=+0.172244135 container start 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:03:51 np0005604375 podman[209690]: 2026-02-01 15:03:51.462349039 +0000 UTC m=+0.177162834 container attach 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 10:03:51 np0005604375 zealous_goldstine[209707]: 167 167
Feb  1 10:03:51 np0005604375 systemd[1]: libpod-86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2.scope: Deactivated successfully.
Feb  1 10:03:51 np0005604375 podman[209690]: 2026-02-01 15:03:51.463740708 +0000 UTC m=+0.178554503 container died 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:03:51 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0897498f0f2da865418ddf8869f3a47577d417e0c11ff9b628d30937da3a39b2-merged.mount: Deactivated successfully.
Feb  1 10:03:51 np0005604375 podman[209690]: 2026-02-01 15:03:51.508119976 +0000 UTC m=+0.222933731 container remove 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:03:51 np0005604375 systemd[1]: libpod-conmon-86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2.scope: Deactivated successfully.
Feb  1 10:03:51 np0005604375 podman[209760]: 2026-02-01 15:03:51.670359889 +0000 UTC m=+0.042723082 container create a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:03:51 np0005604375 systemd[1]: Started libpod-conmon-a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716.scope.
Feb  1 10:03:51 np0005604375 podman[209760]: 2026-02-01 15:03:51.655204283 +0000 UTC m=+0.027567496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:03:51 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:03:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:51 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:51 np0005604375 podman[209760]: 2026-02-01 15:03:51.789048207 +0000 UTC m=+0.161411460 container init a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:03:51 np0005604375 podman[209760]: 2026-02-01 15:03:51.801337373 +0000 UTC m=+0.173700606 container start a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 10:03:51 np0005604375 podman[209760]: 2026-02-01 15:03:51.805811819 +0000 UTC m=+0.178175062 container attach a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:03:51 np0005604375 python3.9[209827]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]: {
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:    "0": [
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:        {
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "devices": [
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "/dev/loop3"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            ],
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_name": "ceph_lv0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_size": "21470642176",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "name": "ceph_lv0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "tags": {
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cluster_name": "ceph",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.crush_device_class": "",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.encrypted": "0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.objectstore": "bluestore",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osd_id": "0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.type": "block",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.vdo": "0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.with_tpm": "0"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            },
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "type": "block",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "vg_name": "ceph_vg0"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:        }
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:    ],
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:    "1": [
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:        {
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "devices": [
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "/dev/loop4"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            ],
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_name": "ceph_lv1",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_size": "21470642176",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "name": "ceph_lv1",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "tags": {
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cluster_name": "ceph",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.crush_device_class": "",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.encrypted": "0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.objectstore": "bluestore",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osd_id": "1",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.type": "block",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.vdo": "0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.with_tpm": "0"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            },
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "type": "block",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "vg_name": "ceph_vg1"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:        }
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:    ],
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:    "2": [
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:        {
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "devices": [
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "/dev/loop5"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            ],
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_name": "ceph_lv2",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_size": "21470642176",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "name": "ceph_lv2",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "tags": {
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.cluster_name": "ceph",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.crush_device_class": "",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.encrypted": "0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.objectstore": "bluestore",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osd_id": "2",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.type": "block",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.vdo": "0",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:                "ceph.with_tpm": "0"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            },
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "type": "block",
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:            "vg_name": "ceph_vg2"
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:        }
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]:    ]
Feb  1 10:03:52 np0005604375 sleepy_wu[209822]: }
Feb  1 10:03:52 np0005604375 systemd[1]: libpod-a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716.scope: Deactivated successfully.
Feb  1 10:03:52 np0005604375 podman[209760]: 2026-02-01 15:03:52.125319415 +0000 UTC m=+0.497682618 container died a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:03:52 np0005604375 systemd[1]: var-lib-containers-storage-overlay-12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505-merged.mount: Deactivated successfully.
Feb  1 10:03:52 np0005604375 podman[209760]: 2026-02-01 15:03:52.159414324 +0000 UTC m=+0.531777517 container remove a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:03:52 np0005604375 systemd[1]: libpod-conmon-a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716.scope: Deactivated successfully.
Feb  1 10:03:52 np0005604375 podman[210056]: 2026-02-01 15:03:52.604409569 +0000 UTC m=+0.062701244 container create 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 10:03:52 np0005604375 systemd[1]: Started libpod-conmon-85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853.scope.
Feb  1 10:03:52 np0005604375 podman[210056]: 2026-02-01 15:03:52.580324132 +0000 UTC m=+0.038615877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:03:52 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:03:52 np0005604375 podman[210056]: 2026-02-01 15:03:52.694952596 +0000 UTC m=+0.153244281 container init 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:03:52 np0005604375 python3.9[210055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:52 np0005604375 podman[210056]: 2026-02-01 15:03:52.702888989 +0000 UTC m=+0.161180664 container start 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:03:52 np0005604375 podman[210056]: 2026-02-01 15:03:52.706395158 +0000 UTC m=+0.164686853 container attach 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 10:03:52 np0005604375 mystifying_boyd[210072]: 167 167
Feb  1 10:03:52 np0005604375 systemd[1]: libpod-85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853.scope: Deactivated successfully.
Feb  1 10:03:52 np0005604375 podman[210056]: 2026-02-01 15:03:52.708545278 +0000 UTC m=+0.166836983 container died 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 10:03:52 np0005604375 systemd[1]: var-lib-containers-storage-overlay-5e199c1ece1155068e44a6720d012361457eb9f1e0923911eec76dc416193485-merged.mount: Deactivated successfully.
Feb  1 10:03:52 np0005604375 podman[210056]: 2026-02-01 15:03:52.754958974 +0000 UTC m=+0.213250679 container remove 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 10:03:52 np0005604375 systemd[1]: libpod-conmon-85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853.scope: Deactivated successfully.
Feb  1 10:03:52 np0005604375 podman[210121]: 2026-02-01 15:03:52.913670718 +0000 UTC m=+0.055918624 container create ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:03:52 np0005604375 systemd[1]: Started libpod-conmon-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope.
Feb  1 10:03:52 np0005604375 podman[210121]: 2026-02-01 15:03:52.884369323 +0000 UTC m=+0.026617279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:03:52 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:03:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:03:53 np0005604375 podman[210121]: 2026-02-01 15:03:53.010899042 +0000 UTC m=+0.153146938 container init ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:03:53 np0005604375 podman[210121]: 2026-02-01 15:03:53.021752097 +0000 UTC m=+0.163999973 container start ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 10:03:53 np0005604375 podman[210121]: 2026-02-01 15:03:53.025334488 +0000 UTC m=+0.167582374 container attach ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 10:03:53 np0005604375 python3.9[210195]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n3lznqwz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:53 np0005604375 lvm[210421]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:03:53 np0005604375 lvm[210421]: VG ceph_vg0 finished
Feb  1 10:03:53 np0005604375 lvm[210422]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:03:53 np0005604375 lvm[210422]: VG ceph_vg1 finished
Feb  1 10:03:53 np0005604375 lvm[210424]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:03:53 np0005604375 lvm[210424]: VG ceph_vg2 finished
Feb  1 10:03:53 np0005604375 lvm[210425]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:03:53 np0005604375 lvm[210425]: VG ceph_vg2 finished
Feb  1 10:03:53 np0005604375 nervous_bouman[210162]: {}
Feb  1 10:03:53 np0005604375 systemd[1]: libpod-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope: Deactivated successfully.
Feb  1 10:03:53 np0005604375 podman[210121]: 2026-02-01 15:03:53.845097495 +0000 UTC m=+0.987345361 container died ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 10:03:53 np0005604375 systemd[1]: libpod-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope: Consumed 1.211s CPU time.
Feb  1 10:03:53 np0005604375 python3.9[210418]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:53 np0005604375 systemd[1]: var-lib-containers-storage-overlay-23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3-merged.mount: Deactivated successfully.
Feb  1 10:03:53 np0005604375 podman[210121]: 2026-02-01 15:03:53.881324634 +0000 UTC m=+1.023572500 container remove ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:03:53 np0005604375 systemd[1]: libpod-conmon-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope: Deactivated successfully.
Feb  1 10:03:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:03:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:03:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:03:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:03:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:54 np0005604375 python3.9[210542]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:03:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:03:55 np0005604375 python3.9[210694]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:03:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:03:55 np0005604375 python3[210847]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  1 10:03:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:56 np0005604375 python3.9[210999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:57 np0005604375 python3.9[211077]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:57 np0005604375 python3.9[211229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:03:58 np0005604375 python3.9[211354]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958237.3567708-1280-63996755143542/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:03:59 np0005604375 python3.9[211506]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:03:59 np0005604375 python3.9[211584]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:00 np0005604375 python3.9[211736]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:00 np0005604375 python3.9[211814]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.121806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241121841, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2046, "num_deletes": 251, "total_data_size": 3579034, "memory_usage": 3629568, "flush_reason": "Manual Compaction"}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241138884, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3491766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9698, "largest_seqno": 11743, "table_properties": {"data_size": 3482469, "index_size": 5919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17926, "raw_average_key_size": 19, "raw_value_size": 3464030, "raw_average_value_size": 3765, "num_data_blocks": 269, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958008, "oldest_key_time": 1769958008, "file_creation_time": 1769958241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17129 microseconds, and 4533 cpu microseconds.
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.138936) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3491766 bytes OK
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.138957) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.140596) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.140613) EVENT_LOG_v1 {"time_micros": 1769958241140607, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.140633) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3570485, prev total WAL file size 3570485, number of live WAL files 2.
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.141342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3409KB)], [26(6003KB)]
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241141378, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9639399, "oldest_snapshot_seqno": -1}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3701 keys, 8050904 bytes, temperature: kUnknown
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241169624, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8050904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8022401, "index_size": 18153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88843, "raw_average_key_size": 24, "raw_value_size": 7951828, "raw_average_value_size": 2148, "num_data_blocks": 787, "num_entries": 3701, "num_filter_entries": 3701, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.169823) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8050904 bytes
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.170995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 340.6 rd, 284.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4215, records dropped: 514 output_compression: NoCompression
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171015) EVENT_LOG_v1 {"time_micros": 1769958241171004, "job": 10, "event": "compaction_finished", "compaction_time_micros": 28305, "compaction_time_cpu_micros": 15326, "output_level": 6, "num_output_files": 1, "total_output_size": 8050904, "num_input_records": 4215, "num_output_records": 3701, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241171413, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241171895, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.141261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:04:01 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:04:01 np0005604375 python3.9[211966]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:01 np0005604375 python3.9[212091]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958240.9854302-1319-183483607404301/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:02 np0005604375 python3.9[212243]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:03 np0005604375 python3.9[212395]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:04:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:04 np0005604375 python3.9[212550]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:04 np0005604375 python3.9[212702]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:04:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:05 np0005604375 python3.9[212855]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:04:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:06 np0005604375 python3.9[213009]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:04:06 np0005604375 python3.9[213164]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:07 np0005604375 python3.9[213316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:04:07.795 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:04:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:04:07.796 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:04:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:04:07.797 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:04:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:08 np0005604375 python3.9[213439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958247.203211-1391-116251663256016/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:08 np0005604375 python3.9[213591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:09 np0005604375 python3.9[213714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958248.432451-1406-156926511530507/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:10 np0005604375 python3.9[213866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:10 np0005604375 podman[213961]: 2026-02-01 15:04:10.387016129 +0000 UTC m=+0.051159290 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:04:10 np0005604375 podman[213962]: 2026-02-01 15:04:10.435999366 +0000 UTC m=+0.100029033 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:04:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:10 np0005604375 python3.9[214022]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958249.5941124-1421-36466559955629/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:11 np0005604375 python3.9[214185]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:04:11 np0005604375 systemd[1]: Reloading.
Feb  1 10:04:11 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:04:11 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:04:11 np0005604375 systemd[1]: Reached target edpm_libvirt.target.
Feb  1 10:04:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:12 np0005604375 python3.9[214376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  1 10:04:12 np0005604375 systemd[1]: Reloading.
Feb  1 10:04:12 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:04:12 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:04:12 np0005604375 systemd[1]: Reloading.
Feb  1 10:04:12 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:04:12 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:04:13 np0005604375 systemd[1]: session-48.scope: Deactivated successfully.
Feb  1 10:04:13 np0005604375 systemd[1]: session-48.scope: Consumed 2min 56.514s CPU time.
Feb  1 10:04:13 np0005604375 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Feb  1 10:04:13 np0005604375 systemd-logind[786]: Removed session 48.
Feb  1 10:04:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:04:17
Feb  1 10:04:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:04:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:04:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Feb  1 10:04:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:04:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:04:18 np0005604375 systemd-logind[786]: New session 49 of user zuul.
Feb  1 10:04:18 np0005604375 systemd[1]: Started Session 49 of User zuul.
Feb  1 10:04:19 np0005604375 python3.9[214624]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 10:04:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:20 np0005604375 python3.9[214778]: ansible-ansible.builtin.service_facts Invoked
Feb  1 10:04:21 np0005604375 network[214795]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 10:04:21 np0005604375 network[214796]: 'network-scripts' will be removed from distribution in near future.
Feb  1 10:04:21 np0005604375 network[214797]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 10:04:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:25 np0005604375 python3.9[215069]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  1 10:04:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:26 np0005604375 python3.9[215153]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:04:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:04:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:32 np0005604375 python3.9[215306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:04:33 np0005604375 python3.9[215458]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:04:34 np0005604375 python3.9[215611]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:04:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:34 np0005604375 python3.9[215763]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:04:35 np0005604375 python3.9[215916]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:36 np0005604375 python3.9[216039]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958274.9667573-90-119935617272593/.source.iscsi _original_basename=._y78_1le follow=False checksum=3633b0be9514cf75260a947b044d980e360549a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:37 np0005604375 python3.9[216191]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:37 np0005604375 python3.9[216343]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:39 np0005604375 python3.9[216495]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:04:39 np0005604375 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb  1 10:04:39 np0005604375 python3.9[216651]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:04:39 np0005604375 systemd[1]: Reloading.
Feb  1 10:04:39 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:04:39 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:04:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:40 np0005604375 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  1 10:04:40 np0005604375 systemd[1]: Starting Open-iSCSI...
Feb  1 10:04:40 np0005604375 kernel: Loading iSCSI transport class v2.0-870.
Feb  1 10:04:40 np0005604375 systemd[1]: Started Open-iSCSI.
Feb  1 10:04:40 np0005604375 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb  1 10:04:40 np0005604375 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb  1 10:04:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:40 np0005604375 podman[216825]: 2026-02-01 15:04:40.831618167 +0000 UTC m=+0.074515837 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:04:40 np0005604375 podman[216826]: 2026-02-01 15:04:40.855139049 +0000 UTC m=+0.097895205 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  1 10:04:40 np0005604375 python3.9[216872]: ansible-ansible.builtin.service_facts Invoked
Feb  1 10:04:41 np0005604375 network[216911]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 10:04:41 np0005604375 network[216912]: 'network-scripts' will be removed from distribution in near future.
Feb  1 10:04:41 np0005604375 network[216913]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 10:04:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:44 np0005604375 python3.9[217185]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 10:04:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:46 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 10:04:46 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 10:04:46 np0005604375 systemd[1]: Reloading.
Feb  1 10:04:46 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:04:46 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:04:46 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 10:04:46 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 10:04:46 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 10:04:46 np0005604375 systemd[1]: run-r13ccf82e5d1445de864a6ad7bdbb300f.service: Deactivated successfully.
Feb  1 10:04:47 np0005604375 python3.9[217503]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  1 10:04:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:04:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:04:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:04:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:04:48 np0005604375 python3.9[217655]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb  1 10:04:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:04:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:04:49 np0005604375 python3.9[217811]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:50 np0005604375 python3.9[217934]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958288.9089894-178-195300655979873/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:50 np0005604375 python3.9[218086]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:51 np0005604375 python3.9[218238]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:04:52 np0005604375 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  1 10:04:52 np0005604375 systemd[1]: Stopped Load Kernel Modules.
Feb  1 10:04:52 np0005604375 systemd[1]: Stopping Load Kernel Modules...
Feb  1 10:04:52 np0005604375 systemd[1]: Starting Load Kernel Modules...
Feb  1 10:04:52 np0005604375 systemd[1]: Finished Load Kernel Modules.
Feb  1 10:04:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:52 np0005604375 python3.9[218394]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:04:53 np0005604375 python3.9[218547]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:04:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:54 np0005604375 python3.9[218749]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:04:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:04:55 np0005604375 python3.9[218953]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958293.9196787-229-18455951116993/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:55 np0005604375 podman[218965]: 2026-02-01 15:04:55.081059481 +0000 UTC m=+0.046561955 container create c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:04:55 np0005604375 systemd[1]: Started libpod-conmon-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope.
Feb  1 10:04:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:04:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:04:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:04:55 np0005604375 podman[218965]: 2026-02-01 15:04:55.063213271 +0000 UTC m=+0.028715785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:04:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:04:55 np0005604375 podman[218965]: 2026-02-01 15:04:55.17209107 +0000 UTC m=+0.137593564 container init c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:04:55 np0005604375 podman[218965]: 2026-02-01 15:04:55.177426729 +0000 UTC m=+0.142929213 container start c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 10:04:55 np0005604375 podman[218965]: 2026-02-01 15:04:55.180829904 +0000 UTC m=+0.146332418 container attach c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:04:55 np0005604375 busy_wilbur[219005]: 167 167
Feb  1 10:04:55 np0005604375 systemd[1]: libpod-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope: Deactivated successfully.
Feb  1 10:04:55 np0005604375 conmon[219005]: conmon c5324e1227d59fadde18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope/container/memory.events
Feb  1 10:04:55 np0005604375 podman[218965]: 2026-02-01 15:04:55.184899618 +0000 UTC m=+0.150402102 container died c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:04:55 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f6ebe5b7524288676b71059a11d2a5c920446b9653977d8e15c6ec481e8abacc-merged.mount: Deactivated successfully.
Feb  1 10:04:55 np0005604375 podman[218965]: 2026-02-01 15:04:55.220384292 +0000 UTC m=+0.185886766 container remove c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:04:55 np0005604375 systemd[1]: libpod-conmon-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope: Deactivated successfully.
Feb  1 10:04:55 np0005604375 podman[219082]: 2026-02-01 15:04:55.375246208 +0000 UTC m=+0.048639163 container create f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:04:55 np0005604375 systemd[1]: Started libpod-conmon-f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33.scope.
Feb  1 10:04:55 np0005604375 podman[219082]: 2026-02-01 15:04:55.356127653 +0000 UTC m=+0.029520658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:04:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:04:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:55 np0005604375 podman[219082]: 2026-02-01 15:04:55.4885274 +0000 UTC m=+0.161920395 container init f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 10:04:55 np0005604375 podman[219082]: 2026-02-01 15:04:55.500800074 +0000 UTC m=+0.174193039 container start f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 10:04:55 np0005604375 podman[219082]: 2026-02-01 15:04:55.505559337 +0000 UTC m=+0.178952332 container attach f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:04:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:04:55 np0005604375 python3.9[219178]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:04:55 np0005604375 nostalgic_williams[219144]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:04:55 np0005604375 nostalgic_williams[219144]: --> All data devices are unavailable
Feb  1 10:04:55 np0005604375 systemd[1]: libpod-f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33.scope: Deactivated successfully.
Feb  1 10:04:55 np0005604375 podman[219082]: 2026-02-01 15:04:55.896395621 +0000 UTC m=+0.569788586 container died f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:04:55 np0005604375 systemd[1]: var-lib-containers-storage-overlay-fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4-merged.mount: Deactivated successfully.
Feb  1 10:04:55 np0005604375 podman[219082]: 2026-02-01 15:04:55.937934974 +0000 UTC m=+0.611327959 container remove f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 10:04:55 np0005604375 systemd[1]: libpod-conmon-f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33.scope: Deactivated successfully.
Feb  1 10:04:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:56 np0005604375 podman[219422]: 2026-02-01 15:04:56.307440321 +0000 UTC m=+0.042393658 container create 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 10:04:56 np0005604375 systemd[1]: Started libpod-conmon-5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0.scope.
Feb  1 10:04:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:04:56 np0005604375 python3.9[219408]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:56 np0005604375 podman[219422]: 2026-02-01 15:04:56.374426646 +0000 UTC m=+0.109379983 container init 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:04:56 np0005604375 podman[219422]: 2026-02-01 15:04:56.381500785 +0000 UTC m=+0.116454142 container start 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 10:04:56 np0005604375 hungry_liskov[219439]: 167 167
Feb  1 10:04:56 np0005604375 podman[219422]: 2026-02-01 15:04:56.289156549 +0000 UTC m=+0.024109926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:04:56 np0005604375 systemd[1]: libpod-5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0.scope: Deactivated successfully.
Feb  1 10:04:56 np0005604375 podman[219422]: 2026-02-01 15:04:56.385502997 +0000 UTC m=+0.120456364 container attach 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:04:56 np0005604375 podman[219422]: 2026-02-01 15:04:56.386067562 +0000 UTC m=+0.121020909 container died 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 10:04:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4cb7e329b08a989fd97b25919b17eda042e793033435d3e90d17c0217fe03374-merged.mount: Deactivated successfully.
Feb  1 10:04:56 np0005604375 podman[219422]: 2026-02-01 15:04:56.42456675 +0000 UTC m=+0.159520087 container remove 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:04:56 np0005604375 systemd[1]: libpod-conmon-5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0.scope: Deactivated successfully.
Feb  1 10:04:56 np0005604375 podman[219485]: 2026-02-01 15:04:56.569047866 +0000 UTC m=+0.045159696 container create e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:04:56 np0005604375 systemd[1]: Started libpod-conmon-e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03.scope.
Feb  1 10:04:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:04:56 np0005604375 podman[219485]: 2026-02-01 15:04:56.545061664 +0000 UTC m=+0.021173504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:04:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:56 np0005604375 podman[219485]: 2026-02-01 15:04:56.655720343 +0000 UTC m=+0.131832153 container init e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  1 10:04:56 np0005604375 podman[219485]: 2026-02-01 15:04:56.66384379 +0000 UTC m=+0.139955590 container start e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 10:04:56 np0005604375 podman[219485]: 2026-02-01 15:04:56.667257786 +0000 UTC m=+0.143369616 container attach e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]: {
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:    "0": [
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:        {
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "devices": [
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "/dev/loop3"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            ],
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_name": "ceph_lv0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_size": "21470642176",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "name": "ceph_lv0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "tags": {
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cluster_name": "ceph",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.crush_device_class": "",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.encrypted": "0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.objectstore": "bluestore",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osd_id": "0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.type": "block",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.vdo": "0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.with_tpm": "0"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            },
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "type": "block",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "vg_name": "ceph_vg0"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:        }
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:    ],
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:    "1": [
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:        {
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "devices": [
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "/dev/loop4"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            ],
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_name": "ceph_lv1",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_size": "21470642176",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "name": "ceph_lv1",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "tags": {
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cluster_name": "ceph",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.crush_device_class": "",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.encrypted": "0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.objectstore": "bluestore",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osd_id": "1",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.type": "block",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.vdo": "0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.with_tpm": "0"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            },
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "type": "block",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "vg_name": "ceph_vg1"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:        }
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:    ],
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:    "2": [
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:        {
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "devices": [
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "/dev/loop5"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            ],
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_name": "ceph_lv2",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_size": "21470642176",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "name": "ceph_lv2",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "tags": {
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.cluster_name": "ceph",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.crush_device_class": "",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.encrypted": "0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.objectstore": "bluestore",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osd_id": "2",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.type": "block",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.vdo": "0",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:                "ceph.with_tpm": "0"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            },
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "type": "block",
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:            "vg_name": "ceph_vg2"
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:        }
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]:    ]
Feb  1 10:04:56 np0005604375 zealous_sammet[219532]: }
Feb  1 10:04:56 np0005604375 systemd[1]: libpod-e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03.scope: Deactivated successfully.
Feb  1 10:04:56 np0005604375 podman[219485]: 2026-02-01 15:04:56.949665413 +0000 UTC m=+0.425777213 container died e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 10:04:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay-8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7-merged.mount: Deactivated successfully.
Feb  1 10:04:56 np0005604375 podman[219485]: 2026-02-01 15:04:56.991803603 +0000 UTC m=+0.467915393 container remove e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 10:04:57 np0005604375 systemd[1]: libpod-conmon-e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03.scope: Deactivated successfully.
Feb  1 10:04:57 np0005604375 python3.9[219699]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:57 np0005604375 podman[219714]: 2026-02-01 15:04:57.388610464 +0000 UTC m=+0.040545547 container create 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb  1 10:04:57 np0005604375 systemd[1]: Started libpod-conmon-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope.
Feb  1 10:04:57 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:04:57 np0005604375 podman[219714]: 2026-02-01 15:04:57.464855889 +0000 UTC m=+0.116791002 container init 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb  1 10:04:57 np0005604375 podman[219714]: 2026-02-01 15:04:57.372906834 +0000 UTC m=+0.024841957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:04:57 np0005604375 podman[219714]: 2026-02-01 15:04:57.470763054 +0000 UTC m=+0.122698137 container start 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 10:04:57 np0005604375 podman[219714]: 2026-02-01 15:04:57.474472438 +0000 UTC m=+0.126407621 container attach 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 10:04:57 np0005604375 cranky_bell[219754]: 167 167
Feb  1 10:04:57 np0005604375 systemd[1]: libpod-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope: Deactivated successfully.
Feb  1 10:04:57 np0005604375 conmon[219754]: conmon 55ac16ce6946dfa0af4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope/container/memory.events
Feb  1 10:04:57 np0005604375 podman[219780]: 2026-02-01 15:04:57.518646645 +0000 UTC m=+0.028145359 container died 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 10:04:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d72881461fc54d6b8844138f160a7e75fffb7b1c2d7892aea9ed0e8bc3367d24-merged.mount: Deactivated successfully.
Feb  1 10:04:57 np0005604375 podman[219780]: 2026-02-01 15:04:57.560450215 +0000 UTC m=+0.069948929 container remove 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:04:57 np0005604375 systemd[1]: libpod-conmon-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope: Deactivated successfully.
Feb  1 10:04:57 np0005604375 podman[219857]: 2026-02-01 15:04:57.729219851 +0000 UTC m=+0.041888754 container create bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  1 10:04:57 np0005604375 systemd[1]: Started libpod-conmon-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope.
Feb  1 10:04:57 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:04:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:57 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:04:57 np0005604375 podman[219857]: 2026-02-01 15:04:57.803182812 +0000 UTC m=+0.115851755 container init bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 10:04:57 np0005604375 podman[219857]: 2026-02-01 15:04:57.711938427 +0000 UTC m=+0.024607330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:04:57 np0005604375 podman[219857]: 2026-02-01 15:04:57.812339569 +0000 UTC m=+0.125008472 container start bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:04:57 np0005604375 podman[219857]: 2026-02-01 15:04:57.818378928 +0000 UTC m=+0.131047971 container attach bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb  1 10:04:57 np0005604375 python3.9[219927]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:04:58 np0005604375 lvm[220151]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:04:58 np0005604375 lvm[220151]: VG ceph_vg0 finished
Feb  1 10:04:58 np0005604375 lvm[220155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:04:58 np0005604375 lvm[220155]: VG ceph_vg1 finished
Feb  1 10:04:58 np0005604375 lvm[220159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:04:58 np0005604375 lvm[220160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:04:58 np0005604375 lvm[220160]: VG ceph_vg0 finished
Feb  1 10:04:58 np0005604375 lvm[220159]: VG ceph_vg2 finished
Feb  1 10:04:58 np0005604375 jolly_babbage[219910]: {}
Feb  1 10:04:58 np0005604375 systemd[1]: libpod-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope: Deactivated successfully.
Feb  1 10:04:58 np0005604375 systemd[1]: libpod-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope: Consumed 1.125s CPU time.
Feb  1 10:04:58 np0005604375 podman[219857]: 2026-02-01 15:04:58.580804287 +0000 UTC m=+0.893473190 container died bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  1 10:04:58 np0005604375 systemd[1]: var-lib-containers-storage-overlay-55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e-merged.mount: Deactivated successfully.
Feb  1 10:04:58 np0005604375 python3.9[220157]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:58 np0005604375 podman[219857]: 2026-02-01 15:04:58.627284398 +0000 UTC m=+0.939953321 container remove bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:04:58 np0005604375 systemd[1]: libpod-conmon-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope: Deactivated successfully.
Feb  1 10:04:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:04:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:04:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:04:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:04:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:04:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:04:59 np0005604375 python3.9[220351]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:04:59 np0005604375 python3.9[220503]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:00 np0005604375 python3.9[220655]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:01 np0005604375 python3.9[220807]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:05:01 np0005604375 python3.9[220961]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:02 np0005604375 python3.9[221114]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:02 np0005604375 systemd[1]: Listening on multipathd control socket.
Feb  1 10:05:03 np0005604375 python3.9[221270]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:03 np0005604375 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb  1 10:05:03 np0005604375 udevadm[221275]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb  1 10:05:03 np0005604375 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb  1 10:05:03 np0005604375 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  1 10:05:03 np0005604375 multipathd[221279]: --------start up--------
Feb  1 10:05:03 np0005604375 multipathd[221279]: read /etc/multipath.conf
Feb  1 10:05:03 np0005604375 multipathd[221279]: path checkers start up
Feb  1 10:05:03 np0005604375 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  1 10:05:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:04 np0005604375 python3.9[221438]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  1 10:05:05 np0005604375 python3.9[221590]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb  1 10:05:05 np0005604375 kernel: Key type psk registered
Feb  1 10:05:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:05 np0005604375 python3.9[221751]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:05:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:06 np0005604375 python3.9[221874]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958305.5295756-359-164621091947738/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:07 np0005604375 python3.9[222026]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:05:07.797 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:05:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:05:07.798 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:05:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:05:07.798 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:05:07 np0005604375 python3.9[222178]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:05:07 np0005604375 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  1 10:05:07 np0005604375 systemd[1]: Stopped Load Kernel Modules.
Feb  1 10:05:07 np0005604375 systemd[1]: Stopping Load Kernel Modules...
Feb  1 10:05:07 np0005604375 systemd[1]: Starting Load Kernel Modules...
Feb  1 10:05:07 np0005604375 systemd[1]: Finished Load Kernel Modules.
Feb  1 10:05:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:08 np0005604375 python3.9[222334]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  1 10:05:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:10 np0005604375 podman[222339]: 2026-02-01 15:05:10.990360676 +0000 UTC m=+0.074991671 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb  1 10:05:11 np0005604375 podman[222340]: 2026-02-01 15:05:11.007167957 +0000 UTC m=+0.086414641 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 10:05:11 np0005604375 systemd[1]: Reloading.
Feb  1 10:05:11 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:05:11 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:05:11 np0005604375 systemd[1]: Reloading.
Feb  1 10:05:11 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:05:11 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:05:11 np0005604375 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  1 10:05:11 np0005604375 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  1 10:05:11 np0005604375 lvm[222490]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:05:11 np0005604375 lvm[222490]: VG ceph_vg0 finished
Feb  1 10:05:11 np0005604375 lvm[222491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:05:11 np0005604375 lvm[222491]: VG ceph_vg2 finished
Feb  1 10:05:11 np0005604375 lvm[222492]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:05:11 np0005604375 lvm[222492]: VG ceph_vg1 finished
Feb  1 10:05:11 np0005604375 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  1 10:05:11 np0005604375 systemd[1]: Starting man-db-cache-update.service...
Feb  1 10:05:11 np0005604375 systemd[1]: Reloading.
Feb  1 10:05:12 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:05:12 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:05:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:12 np0005604375 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  1 10:05:12 np0005604375 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  1 10:05:12 np0005604375 systemd[1]: Finished man-db-cache-update.service.
Feb  1 10:05:12 np0005604375 systemd[1]: run-r233f14e7165b4044adbfb0376f2b3273.service: Deactivated successfully.
Feb  1 10:05:13 np0005604375 python3.9[223848]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:05:13 np0005604375 iscsid[216691]: iscsid shutting down.
Feb  1 10:05:13 np0005604375 systemd[1]: Stopping Open-iSCSI...
Feb  1 10:05:13 np0005604375 systemd[1]: iscsid.service: Deactivated successfully.
Feb  1 10:05:13 np0005604375 systemd[1]: Stopped Open-iSCSI.
Feb  1 10:05:13 np0005604375 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  1 10:05:13 np0005604375 systemd[1]: Starting Open-iSCSI...
Feb  1 10:05:13 np0005604375 systemd[1]: Started Open-iSCSI.
Feb  1 10:05:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:14 np0005604375 python3.9[224004]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:05:14 np0005604375 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb  1 10:05:14 np0005604375 multipathd[221279]: exit (signal)
Feb  1 10:05:14 np0005604375 multipathd[221279]: --------shut down-------
Feb  1 10:05:14 np0005604375 systemd[1]: multipathd.service: Deactivated successfully.
Feb  1 10:05:14 np0005604375 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb  1 10:05:14 np0005604375 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  1 10:05:14 np0005604375 multipathd[224011]: --------start up--------
Feb  1 10:05:14 np0005604375 multipathd[224011]: read /etc/multipath.conf
Feb  1 10:05:14 np0005604375 multipathd[224011]: path checkers start up
Feb  1 10:05:14 np0005604375 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  1 10:05:15 np0005604375 python3.9[224168]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  1 10:05:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:16 np0005604375 python3.9[224324]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:17 np0005604375 python3.9[224476]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 10:05:17 np0005604375 systemd[1]: Reloading.
Feb  1 10:05:17 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:05:17 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:05:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:05:17
Feb  1 10:05:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:05:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:05:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms']
Feb  1 10:05:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:18 np0005604375 python3.9[224661]: ansible-ansible.builtin.service_facts Invoked
Feb  1 10:05:18 np0005604375 network[224678]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  1 10:05:18 np0005604375 network[224679]: 'network-scripts' will be removed from distribution in near future.
Feb  1 10:05:18 np0005604375 network[224680]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:05:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:05:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:21 np0005604375 python3.9[224953]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:22 np0005604375 python3.9[225106]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:23 np0005604375 python3.9[225259]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:23 np0005604375 python3.9[225412]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:24 np0005604375 python3.9[225565]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:25 np0005604375 python3.9[225718]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:26 np0005604375 python3.9[225871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:26 np0005604375 python3.9[226024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:05:27 np0005604375 python3.9[226177]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:05:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:05:28 np0005604375 python3.9[226329]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:29 np0005604375 python3.9[226481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:29 np0005604375 python3.9[226633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:30 np0005604375 python3.9[226785]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:30 np0005604375 python3.9[226937]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:31 np0005604375 python3.9[227089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:32 np0005604375 python3.9[227241]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:32 np0005604375 python3.9[227393]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:33 np0005604375 python3.9[227545]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:33 np0005604375 python3.9[227697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:34 np0005604375 python3.9[227849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:35 np0005604375 python3.9[228001]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:35 np0005604375 python3.9[228153]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:36 np0005604375 python3.9[228305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:36 np0005604375 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb  1 10:05:36 np0005604375 python3.9[228458]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:05:37 np0005604375 python3.9[228610]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:37 np0005604375 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb  1 10:05:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:38 np0005604375 python3.9[228763]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  1 10:05:38 np0005604375 python3.9[228915]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 10:05:38 np0005604375 systemd[1]: Reloading.
Feb  1 10:05:39 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:05:39 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:05:39 np0005604375 python3.9[229102]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:40 np0005604375 python3.9[229255]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:41 np0005604375 python3.9[229408]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:41 np0005604375 podman[229410]: 2026-02-01 15:05:41.144993992 +0000 UTC m=+0.064755824 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb  1 10:05:41 np0005604375 podman[229411]: 2026-02-01 15:05:41.218767098 +0000 UTC m=+0.134331333 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb  1 10:05:41 np0005604375 python3.9[229606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:42 np0005604375 python3.9[229759]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:42 np0005604375 python3.9[229912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:43 np0005604375 python3.9[230065]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:43 np0005604375 python3.9[230218]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  1 10:05:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:45 np0005604375 python3.9[230371]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:46 np0005604375 python3.9[230523]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:46 np0005604375 python3.9[230675]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:47 np0005604375 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb  1 10:05:47 np0005604375 systemd[1]: virtqemud.service: Deactivated successfully.
Feb  1 10:05:47 np0005604375 python3.9[230827]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:47 np0005604375 python3.9[230982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:48 np0005604375 python3.9[231134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:05:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:05:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:05:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:05:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:05:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:05:49 np0005604375 python3.9[231286]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:49 np0005604375 python3.9[231438]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:50 np0005604375 python3.9[231590]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:51 np0005604375 python3.9[231742]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:05:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:05:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:56 np0005604375 python3.9[231894]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb  1 10:05:57 np0005604375 python3.9[232047]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  1 10:05:58 np0005604375 python3.9[232205]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  1 10:05:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:05:59 np0005604375 systemd-logind[786]: New session 50 of user zuul.
Feb  1 10:05:59 np0005604375 systemd[1]: Started Session 50 of User zuul.
Feb  1 10:05:59 np0005604375 systemd[1]: session-50.scope: Deactivated successfully.
Feb  1 10:05:59 np0005604375 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Feb  1 10:05:59 np0005604375 systemd-logind[786]: Removed session 50.
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:05:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:05:59 np0005604375 podman[232483]: 2026-02-01 15:05:59.702566058 +0000 UTC m=+0.048896371 container create f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 10:05:59 np0005604375 systemd[1]: Started libpod-conmon-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope.
Feb  1 10:05:59 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:05:59 np0005604375 podman[232483]: 2026-02-01 15:05:59.76469977 +0000 UTC m=+0.111030123 container init f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 10:05:59 np0005604375 podman[232483]: 2026-02-01 15:05:59.774863275 +0000 UTC m=+0.121193578 container start f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 10:05:59 np0005604375 flamboyant_satoshi[232550]: 167 167
Feb  1 10:05:59 np0005604375 podman[232483]: 2026-02-01 15:05:59.683394681 +0000 UTC m=+0.029725084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:05:59 np0005604375 podman[232483]: 2026-02-01 15:05:59.778227269 +0000 UTC m=+0.124557582 container attach f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 10:05:59 np0005604375 systemd[1]: libpod-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope: Deactivated successfully.
Feb  1 10:05:59 np0005604375 conmon[232550]: conmon f7210b7abd3a8ba94736 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope/container/memory.events
Feb  1 10:05:59 np0005604375 podman[232483]: 2026-02-01 15:05:59.779786853 +0000 UTC m=+0.126117166 container died f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 10:05:59 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4692f35500e3ce750378469c0f5d72f700c0ec8ea09e86e301e1520bd5e0dbb3-merged.mount: Deactivated successfully.
Feb  1 10:05:59 np0005604375 podman[232483]: 2026-02-01 15:05:59.812389526 +0000 UTC m=+0.158719839 container remove f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 10:05:59 np0005604375 systemd[1]: libpod-conmon-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope: Deactivated successfully.
Feb  1 10:05:59 np0005604375 python3.9[232549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:05:59 np0005604375 podman[232574]: 2026-02-01 15:05:59.939806958 +0000 UTC m=+0.039545960 container create c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:05:59 np0005604375 systemd[1]: Started libpod-conmon-c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323.scope.
Feb  1 10:05:59 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:05:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:05:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:05:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:05:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:05:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:00 np0005604375 podman[232574]: 2026-02-01 15:06:00.009064969 +0000 UTC m=+0.108803981 container init c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:06:00 np0005604375 podman[232574]: 2026-02-01 15:06:00.014161132 +0000 UTC m=+0.113900144 container start c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  1 10:06:00 np0005604375 podman[232574]: 2026-02-01 15:06:00.017417733 +0000 UTC m=+0.117156745 container attach c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 10:06:00 np0005604375 podman[232574]: 2026-02-01 15:05:59.926955258 +0000 UTC m=+0.026694280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:06:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:06:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:06:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:06:00 np0005604375 vigilant_sutherland[232614]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:06:00 np0005604375 vigilant_sutherland[232614]: --> All data devices are unavailable
Feb  1 10:06:00 np0005604375 python3.9[232718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958359.3657653-986-69621164400684/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:00 np0005604375 podman[232574]: 2026-02-01 15:06:00.410939293 +0000 UTC m=+0.510678295 container died c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 10:06:00 np0005604375 systemd[1]: libpod-c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323.scope: Deactivated successfully.
Feb  1 10:06:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay-85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06-merged.mount: Deactivated successfully.
Feb  1 10:06:00 np0005604375 podman[232574]: 2026-02-01 15:06:00.442086576 +0000 UTC m=+0.541825578 container remove c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:06:00 np0005604375 systemd[1]: libpod-conmon-c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323.scope: Deactivated successfully.
Feb  1 10:06:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:00 np0005604375 podman[232956]: 2026-02-01 15:06:00.791777188 +0000 UTC m=+0.034717924 container create 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:06:00 np0005604375 systemd[1]: Started libpod-conmon-25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda.scope.
Feb  1 10:06:00 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:06:00 np0005604375 podman[232956]: 2026-02-01 15:06:00.854824075 +0000 UTC m=+0.097764831 container init 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 10:06:00 np0005604375 podman[232956]: 2026-02-01 15:06:00.859485686 +0000 UTC m=+0.102426422 container start 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:06:00 np0005604375 podman[232956]: 2026-02-01 15:06:00.862249473 +0000 UTC m=+0.105190239 container attach 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 10:06:00 np0005604375 nostalgic_mclean[232972]: 167 167
Feb  1 10:06:00 np0005604375 systemd[1]: libpod-25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda.scope: Deactivated successfully.
Feb  1 10:06:00 np0005604375 podman[232956]: 2026-02-01 15:06:00.863760206 +0000 UTC m=+0.106700942 container died 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 10:06:00 np0005604375 podman[232956]: 2026-02-01 15:06:00.778143726 +0000 UTC m=+0.021084482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:06:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay-12327ab51599c68be30c6cac0b27df374993e0c4fa0a7a2448b2aadfad33faaa-merged.mount: Deactivated successfully.
Feb  1 10:06:00 np0005604375 podman[232956]: 2026-02-01 15:06:00.889314432 +0000 UTC m=+0.132255168 container remove 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:06:00 np0005604375 python3.9[232949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:00 np0005604375 systemd[1]: libpod-conmon-25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda.scope: Deactivated successfully.
Feb  1 10:06:00 np0005604375 podman[233002]: 2026-02-01 15:06:00.992159035 +0000 UTC m=+0.032101371 container create 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:06:01 np0005604375 systemd[1]: Started libpod-conmon-4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3.scope.
Feb  1 10:06:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:06:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:01 np0005604375 podman[233002]: 2026-02-01 15:06:01.05478872 +0000 UTC m=+0.094731086 container init 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:06:01 np0005604375 podman[233002]: 2026-02-01 15:06:01.059610715 +0000 UTC m=+0.099553081 container start 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  1 10:06:01 np0005604375 podman[233002]: 2026-02-01 15:06:01.062950069 +0000 UTC m=+0.102892435 container attach 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:06:01 np0005604375 podman[233002]: 2026-02-01 15:06:00.97737342 +0000 UTC m=+0.017315776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:06:01 np0005604375 python3.9[233093]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]: {
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:    "0": [
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:        {
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "devices": [
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "/dev/loop3"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            ],
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_name": "ceph_lv0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_size": "21470642176",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "name": "ceph_lv0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "tags": {
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cluster_name": "ceph",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.crush_device_class": "",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.encrypted": "0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.objectstore": "bluestore",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osd_id": "0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.type": "block",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.vdo": "0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.with_tpm": "0"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            },
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "type": "block",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "vg_name": "ceph_vg0"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:        }
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:    ],
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:    "1": [
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:        {
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "devices": [
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "/dev/loop4"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            ],
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_name": "ceph_lv1",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_size": "21470642176",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "name": "ceph_lv1",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "tags": {
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cluster_name": "ceph",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.crush_device_class": "",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.encrypted": "0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.objectstore": "bluestore",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osd_id": "1",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.type": "block",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.vdo": "0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.with_tpm": "0"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            },
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "type": "block",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "vg_name": "ceph_vg1"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:        }
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:    ],
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:    "2": [
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:        {
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "devices": [
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "/dev/loop5"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            ],
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_name": "ceph_lv2",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_size": "21470642176",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "name": "ceph_lv2",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "tags": {
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.cluster_name": "ceph",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.crush_device_class": "",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.encrypted": "0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.objectstore": "bluestore",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osd_id": "2",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.type": "block",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.vdo": "0",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:                "ceph.with_tpm": "0"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            },
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "type": "block",
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:            "vg_name": "ceph_vg2"
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:        }
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]:    ]
Feb  1 10:06:01 np0005604375 quizzical_cray[233062]: }
Feb  1 10:06:01 np0005604375 systemd[1]: libpod-4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3.scope: Deactivated successfully.
Feb  1 10:06:01 np0005604375 podman[233002]: 2026-02-01 15:06:01.298053719 +0000 UTC m=+0.337996085 container died 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 10:06:01 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b-merged.mount: Deactivated successfully.
Feb  1 10:06:01 np0005604375 podman[233002]: 2026-02-01 15:06:01.341153267 +0000 UTC m=+0.381095603 container remove 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 10:06:01 np0005604375 systemd[1]: libpod-conmon-4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3.scope: Deactivated successfully.
Feb  1 10:06:01 np0005604375 podman[233319]: 2026-02-01 15:06:01.681936148 +0000 UTC m=+0.030186758 container create 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 10:06:01 np0005604375 python3.9[233307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:01 np0005604375 systemd[1]: Started libpod-conmon-5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3.scope.
Feb  1 10:06:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:06:01 np0005604375 podman[233319]: 2026-02-01 15:06:01.754968695 +0000 UTC m=+0.103219355 container init 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:06:01 np0005604375 podman[233319]: 2026-02-01 15:06:01.760344705 +0000 UTC m=+0.108595315 container start 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:06:01 np0005604375 podman[233319]: 2026-02-01 15:06:01.763520144 +0000 UTC m=+0.111770774 container attach 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:06:01 np0005604375 frosty_blackwell[233333]: 167 167
Feb  1 10:06:01 np0005604375 podman[233319]: 2026-02-01 15:06:01.668847141 +0000 UTC m=+0.017097771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:06:01 np0005604375 systemd[1]: libpod-5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3.scope: Deactivated successfully.
Feb  1 10:06:01 np0005604375 podman[233319]: 2026-02-01 15:06:01.767783714 +0000 UTC m=+0.116034364 container died 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 10:06:01 np0005604375 systemd[1]: var-lib-containers-storage-overlay-fc94fdfde3c7a3fc467a6141ccf10a75793a3b02df435cbccc47b25bbf30da1e-merged.mount: Deactivated successfully.
Feb  1 10:06:01 np0005604375 podman[233319]: 2026-02-01 15:06:01.813155176 +0000 UTC m=+0.161405786 container remove 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:06:01 np0005604375 systemd[1]: libpod-conmon-5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3.scope: Deactivated successfully.
Feb  1 10:06:01 np0005604375 podman[233427]: 2026-02-01 15:06:01.966767801 +0000 UTC m=+0.061191386 container create 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 10:06:02 np0005604375 systemd[1]: Started libpod-conmon-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope.
Feb  1 10:06:02 np0005604375 podman[233427]: 2026-02-01 15:06:01.942995645 +0000 UTC m=+0.037419330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:06:02 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:06:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:02 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:02 np0005604375 podman[233427]: 2026-02-01 15:06:02.059712346 +0000 UTC m=+0.154135971 container init 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:06:02 np0005604375 podman[233427]: 2026-02-01 15:06:02.069540312 +0000 UTC m=+0.163963937 container start 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:06:02 np0005604375 podman[233427]: 2026-02-01 15:06:02.072877565 +0000 UTC m=+0.167301180 container attach 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:06:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:02 np0005604375 python3.9[233498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958361.3530967-986-6520375956519/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:02 np0005604375 lvm[233722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:06:02 np0005604375 lvm[233722]: VG ceph_vg0 finished
Feb  1 10:06:02 np0005604375 lvm[233725]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:06:02 np0005604375 lvm[233725]: VG ceph_vg1 finished
Feb  1 10:06:02 np0005604375 lvm[233727]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:06:02 np0005604375 lvm[233727]: VG ceph_vg2 finished
Feb  1 10:06:02 np0005604375 python3.9[233706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:02 np0005604375 determined_kepler[233485]: {}
Feb  1 10:06:02 np0005604375 systemd[1]: libpod-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope: Deactivated successfully.
Feb  1 10:06:02 np0005604375 systemd[1]: libpod-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope: Consumed 1.201s CPU time.
Feb  1 10:06:02 np0005604375 podman[233427]: 2026-02-01 15:06:02.867067156 +0000 UTC m=+0.961490751 container died 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:06:02 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626-merged.mount: Deactivated successfully.
Feb  1 10:06:02 np0005604375 podman[233427]: 2026-02-01 15:06:02.905205495 +0000 UTC m=+0.999629070 container remove 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  1 10:06:02 np0005604375 systemd[1]: libpod-conmon-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope: Deactivated successfully.
Feb  1 10:06:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:06:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:06:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:06:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:06:03 np0005604375 python3.9[233889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958362.3567755-986-19488493019293/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:06:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:06:03 np0005604375 python3.9[234039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:04 np0005604375 python3.9[234160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958363.3419368-986-115254137798906/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:04 np0005604375 python3.9[234310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:05 np0005604375 python3.9[234431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958364.520509-986-242614210123598/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:05 np0005604375 python3.9[234583]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:06:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:06 np0005604375 python3.9[234735]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:06:07 np0005604375 python3.9[234887]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:06:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:06:07.799 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:06:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:06:07.800 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:06:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:06:07.801 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:06:07 np0005604375 python3.9[235039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:08 np0005604375 python3.9[235162]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769958367.4356313-1093-220117525042189/.source _original_basename=.t4spuymh follow=False checksum=390336b6fd37bd6abc6a51be59667203fda4a8f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb  1 10:06:09 np0005604375 python3.9[235314]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:06:09 np0005604375 python3.9[235466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:10 np0005604375 python3.9[235587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958369.4889321-1119-267632565637640/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:11 np0005604375 python3.9[235737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  1 10:06:11 np0005604375 podman[235833]: 2026-02-01 15:06:11.397264211 +0000 UTC m=+0.064620362 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb  1 10:06:11 np0005604375 podman[235832]: 2026-02-01 15:06:11.40793309 +0000 UTC m=+0.075289361 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 10:06:11 np0005604375 python3.9[235879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958370.6156359-1134-54541667157450/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  1 10:06:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:12 np0005604375 python3.9[236054]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Feb  1 10:06:13 np0005604375 python3.9[236206]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  1 10:06:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:14 np0005604375 python3[236358]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  1 10:06:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:06:17
Feb  1 10:06:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:06:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:06:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes']
Feb  1 10:06:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:06:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:06:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:24 np0005604375 podman[236371]: 2026-02-01 15:06:24.232734457 +0000 UTC m=+9.617868961 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  1 10:06:24 np0005604375 podman[236459]: 2026-02-01 15:06:24.379528751 +0000 UTC m=+0.050938459 container create 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb  1 10:06:24 np0005604375 podman[236459]: 2026-02-01 15:06:24.351950648 +0000 UTC m=+0.023360356 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  1 10:06:24 np0005604375 python3[236358]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb  1 10:06:25 np0005604375 python3.9[236649]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:06:25 np0005604375 python3.9[236803]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Feb  1 10:06:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:26 np0005604375 python3.9[236955]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  1 10:06:27 np0005604375 python3[237107]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  1 10:06:27 np0005604375 podman[237145]: 2026-02-01 15:06:27.980152104 +0000 UTC m=+0.057547794 container create ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=edpm)
Feb  1 10:06:27 np0005604375 podman[237145]: 2026-02-01 15:06:27.949652709 +0000 UTC m=+0.027048489 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  1 10:06:27 np0005604375 python3[237107]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:06:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:06:28 np0005604375 python3.9[237334]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:06:29 np0005604375 python3.9[237488]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:06:29 np0005604375 python3.9[237639]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769958389.2328014-1230-12339257402297/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  1 10:06:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:30 np0005604375 python3.9[237715]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  1 10:06:30 np0005604375 systemd[1]: Reloading.
Feb  1 10:06:30 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:06:30 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:06:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:31 np0005604375 python3.9[237827]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  1 10:06:31 np0005604375 systemd[1]: Reloading.
Feb  1 10:06:31 np0005604375 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  1 10:06:31 np0005604375 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  1 10:06:31 np0005604375 systemd[1]: Starting nova_compute container...
Feb  1 10:06:31 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:06:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:31 np0005604375 podman[237867]: 2026-02-01 15:06:31.752103779 +0000 UTC m=+0.091397383 container init ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 10:06:31 np0005604375 podman[237867]: 2026-02-01 15:06:31.760540405 +0000 UTC m=+0.099833999 container start ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible)
Feb  1 10:06:31 np0005604375 podman[237867]: nova_compute
Feb  1 10:06:31 np0005604375 systemd[1]: Started nova_compute container.
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + sudo -E kolla_set_configs
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Validating config file
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying service configuration files
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Deleting /etc/ceph
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Creating directory /etc/ceph
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Writing out command to execute
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:31 np0005604375 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  1 10:06:31 np0005604375 nova_compute[237882]: ++ cat /run_command
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + CMD=nova-compute
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + ARGS=
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + sudo kolla_copy_cacerts
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + [[ ! -n '' ]]
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + . kolla_extend_start
Feb  1 10:06:31 np0005604375 nova_compute[237882]: Running command: 'nova-compute'
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + echo 'Running command: '\''nova-compute'\'''
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + umask 0022
Feb  1 10:06:31 np0005604375 nova_compute[237882]: + exec nova-compute
Feb  1 10:06:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:32 np0005604375 python3.9[238043]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:06:33 np0005604375 python3.9[238194]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:06:33 np0005604375 python3.9[238344]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  1 10:06:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.143 237886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.143 237886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.143 237886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.144 237886 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.283 237886 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.301 237886 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.301 237886 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb  1 10:06:34 np0005604375 python3.9[238500]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb  1 10:06:34 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 10:06:34 np0005604375 nova_compute[237882]: 2026-02-01 15:06:34.875 237886 INFO nova.virt.driver [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.083 237886 INFO nova.compute.provider_config [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.210 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.211 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.211 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.212 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.212 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.213 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.213 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.213 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.214 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.214 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.214 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.215 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.215 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.216 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.216 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.217 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.217 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.217 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.218 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.218 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.218 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.219 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.219 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.219 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.220 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.220 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.220 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.221 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.221 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.221 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.222 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.222 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.222 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.223 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.223 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.223 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.224 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.224 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.225 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.225 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.225 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.226 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.226 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.226 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.227 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.227 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.228 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.228 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.229 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.229 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.230 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.230 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.230 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.231 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.231 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.231 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.232 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.232 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.232 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.233 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.233 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.233 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.234 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.234 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.234 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.235 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.235 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.235 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.236 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.236 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.236 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.237 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.237 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.238 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.238 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.238 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.239 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.239 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.239 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.240 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.240 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.240 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.241 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.241 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.243 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.243 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.243 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.244 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.244 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.246 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.246 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.246 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.247 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.247 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.247 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.248 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.248 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.248 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.249 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.249 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.249 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.250 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.250 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.250 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.251 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.251 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.251 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.252 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.252 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.252 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.253 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.253 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.253 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.254 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.254 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.254 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.255 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.255 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.255 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.267 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.267 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.267 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.268 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.268 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.268 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.276 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.276 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.276 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 WARNING oslo_config.cfg [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  1 10:06:35 np0005604375 nova_compute[237882]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  1 10:06:35 np0005604375 nova_compute[237882]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  1 10:06:35 np0005604375 nova_compute[237882]: and ``live_migration_inbound_addr`` respectively.
Feb  1 10:06:35 np0005604375 nova_compute[237882]: ).  Its value may be silently ignored in the future.#033[00m
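[Editor's note: the warning above says `live_migration_uri` is deprecated in favor of `live_migration_scheme` and `live_migration_inbound_addr`. A minimal sketch of the equivalent `nova.conf` fragment, assuming the TLS scheme and a hypothetical target address — the actual address would come from this deployment's migration network:]

```ini
[libvirt]
# Replaces live_migration_uri = qemu+tls://%s/system (seen below in this log).
# Scheme carries the qemu+tls transport; the inbound address names the target host.
live_migration_scheme = tls
live_migration_inbound_addr = 192.0.2.10   # example address, not from this log
```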
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_secret_uuid        = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
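The block above is oslo.config's startup dump of every registered option, emitted one `group.option = value` line at a time and terminated by a row of asterisks. A minimal pure-Python stand-in for that pattern (this is an illustrative sketch, not the real `oslo_config` API; group and option names are taken from the log):

```python
def format_opt_values(groups):
    """Render option groups the way oslo.config's log_opt_values does:
    one "group.option = value" line per option, then a separator row
    of 80 asterisks (cf. cfg.py:2609 and cfg.py:2613 in the log)."""
    lines = []
    for group, opts in groups.items():
        for name, value in sorted(opts.items()):
            lines.append(f"{group}.{name:<30} = {value}")
    lines.append("*" * 80)
    return lines

# Values mirror a few of the logged options above.
sample = {
    "os_vif_ovs": {"ovsdb_connection": "tcp:127.0.0.1:6640",
                   "ovs_vsctl_timeout": 120},
    "privsep_osbrick": {"thread_pool_size": 8, "user": None},
}
for line in format_opt_values(sample):
    print(line)
```

The real implementation logs at DEBUG level rather than printing, which is why the dump only appears when the service runs with `debug = True`.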
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.390 237886 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.406 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.407 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.407 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.407 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  1 10:06:35 np0005604375 systemd[1]: Starting libvirt QEMU daemon...
Feb  1 10:06:35 np0005604375 systemd[1]: Started libvirt QEMU daemon.
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.485 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4761a2b3d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.489 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4761a2b3d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.490 237886 INFO nova.virt.libvirt.driver [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Connection event '1' reason 'None'#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.518 237886 WARNING nova.virt.libvirt.driver [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.519 237886 DEBUG nova.virt.libvirt.volume.mount [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb  1 10:06:35 np0005604375 python3.9[238674]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  1 10:06:35 np0005604375 systemd[1]: Stopping nova_compute container...
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.712 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.712 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  1 10:06:35 np0005604375 nova_compute[237882]: 2026-02-01 15:06:35.713 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  1 10:06:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:36 np0005604375 virtqemud[238696]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Feb  1 10:06:36 np0005604375 virtqemud[238696]: hostname: compute-0
Feb  1 10:06:36 np0005604375 virtqemud[238696]: End of file while reading data: Input/output error
Feb  1 10:06:36 np0005604375 systemd[1]: libpod-ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5.scope: Deactivated successfully.
Feb  1 10:06:36 np0005604375 systemd[1]: libpod-ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5.scope: Consumed 2.854s CPU time.
Feb  1 10:06:36 np0005604375 podman[238730]: 2026-02-01 15:06:36.703531003 +0000 UTC m=+1.026791681 container died ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  1 10:06:36 np0005604375 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5-userdata-shm.mount: Deactivated successfully.
Feb  1 10:06:36 np0005604375 systemd[1]: var-lib-containers-storage-overlay-10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f-merged.mount: Deactivated successfully.
Feb  1 10:06:37 np0005604375 podman[238730]: 2026-02-01 15:06:37.766223619 +0000 UTC m=+2.089484277 container cleanup ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:06:37 np0005604375 podman[238730]: nova_compute
Feb  1 10:06:37 np0005604375 podman[238765]: nova_compute
Feb  1 10:06:37 np0005604375 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb  1 10:06:37 np0005604375 systemd[1]: Stopped nova_compute container.
Feb  1 10:06:37 np0005604375 systemd[1]: Starting nova_compute container...
Feb  1 10:06:37 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:06:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:37 np0005604375 podman[238778]: 2026-02-01 15:06:37.986893894 +0000 UTC m=+0.112342270 container init ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Feb  1 10:06:37 np0005604375 podman[238778]: 2026-02-01 15:06:37.9949756 +0000 UTC m=+0.120423936 container start ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:06:37 np0005604375 podman[238778]: nova_compute
Feb  1 10:06:38 np0005604375 systemd[1]: Started nova_compute container.
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + sudo -E kolla_set_configs
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Validating config file
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying service configuration files
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /etc/ceph
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Creating directory /etc/ceph
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Writing out command to execute
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:38 np0005604375 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  1 10:06:38 np0005604375 nova_compute[238794]: ++ cat /run_command
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + CMD=nova-compute
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + ARGS=
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + sudo kolla_copy_cacerts
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + [[ ! -n '' ]]
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + . kolla_extend_start
Feb  1 10:06:38 np0005604375 nova_compute[238794]: Running command: 'nova-compute'
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + echo 'Running command: '\''nova-compute'\'''
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + umask 0022
Feb  1 10:06:38 np0005604375 nova_compute[238794]: + exec nova-compute
Feb  1 10:06:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:38 np0005604375 python3.9[238957]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb  1 10:06:38 np0005604375 systemd[1]: Started libpod-conmon-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac.scope.
Feb  1 10:06:38 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:06:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:38 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb  1 10:06:39 np0005604375 podman[238982]: 2026-02-01 15:06:39.010760892 +0000 UTC m=+0.145886060 container init 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:06:39 np0005604375 podman[238982]: 2026-02-01 15:06:39.020352901 +0000 UTC m=+0.155478039 container start 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:06:39 np0005604375 python3.9[238957]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Applying nova statedir ownership
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb  1 10:06:39 np0005604375 nova_compute_init[239004]: INFO:nova_statedir:Nova statedir ownership complete
Feb  1 10:06:39 np0005604375 systemd[1]: libpod-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac.scope: Deactivated successfully.
Feb  1 10:06:39 np0005604375 podman[239005]: 2026-02-01 15:06:39.086474944 +0000 UTC m=+0.038897161 container died 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm)
Feb  1 10:06:39 np0005604375 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac-userdata-shm.mount: Deactivated successfully.
Feb  1 10:06:39 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932-merged.mount: Deactivated successfully.
Feb  1 10:06:39 np0005604375 podman[239015]: 2026-02-01 15:06:39.13805056 +0000 UTC m=+0.050248489 container cleanup 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:06:39 np0005604375 systemd[1]: libpod-conmon-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac.scope: Deactivated successfully.
Feb  1 10:06:39 np0005604375 systemd[1]: session-49.scope: Deactivated successfully.
Feb  1 10:06:39 np0005604375 systemd[1]: session-49.scope: Consumed 1min 43.046s CPU time.
Feb  1 10:06:39 np0005604375 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Feb  1 10:06:39 np0005604375 systemd-logind[786]: Removed session 49.
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.085 238798 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.086 238798 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.086 238798 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.086 238798 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb  1 10:06:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.284 238798 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.310 238798 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.311 238798 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.816 238798 INFO nova.virt.driver [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.954 238798 INFO nova.compute.provider_config [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.978 238798 DEBUG oslo_concurrency.lockutils [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.978 238798 DEBUG oslo_concurrency.lockutils [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.979 238798 DEBUG oslo_concurrency.lockutils [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.979 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:40 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.034 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.034 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.034 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 WARNING oslo_config.cfg [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  1 10:06:41 np0005604375 nova_compute[238794]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  1 10:06:41 np0005604375 nova_compute[238794]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  1 10:06:41 np0005604375 nova_compute[238794]: and ``live_migration_inbound_addr`` respectively.
Feb  1 10:06:41 np0005604375 nova_compute[238794]: ).  Its value may be silently ignored in the future.
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_secret_uuid        = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.119 238798 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.145 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.146 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.146 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.146 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.161 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f755fa4d250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.164 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f755fa4d250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.164 238798 INFO nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Connection event '1' reason 'None'#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.170 238798 INFO nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host capabilities <capabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <host>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <uuid>072bb88e-d455-426c-a850-83903b041dc8</uuid>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <arch>x86_64</arch>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model>EPYC-Rome-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <vendor>AMD</vendor>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <microcode version='16777317'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <signature family='23' model='49' stepping='0'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <maxphysaddr mode='emulate' bits='40'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='x2apic'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='tsc-deadline'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='osxsave'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='hypervisor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='tsc_adjust'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='spec-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='stibp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='arch-capabilities'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='cmp_legacy'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='topoext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='virt-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='lbrv'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='tsc-scale'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='vmcb-clean'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='pause-filter'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='pfthreshold'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='svme-addr-chk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='rdctl-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='skip-l1dfl-vmentry'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='mds-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature name='pschange-mc-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <pages unit='KiB' size='4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <pages unit='KiB' size='2048'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <pages unit='KiB' size='1048576'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <power_management>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <suspend_mem/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </power_management>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <iommu support='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <migration_features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <live/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <uri_transports>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <uri_transport>tcp</uri_transport>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <uri_transport>rdma</uri_transport>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </uri_transports>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </migration_features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <topology>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <cells num='1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <cell id='0'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          <memory unit='KiB'>7864300</memory>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          <pages unit='KiB' size='4'>1966075</pages>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          <pages unit='KiB' size='2048'>0</pages>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          <pages unit='KiB' size='1048576'>0</pages>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          <distances>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <sibling id='0' value='10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          </distances>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          <cpus num='8'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:          </cpus>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        </cell>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </cells>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </topology>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <cache>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </cache>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <secmodel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model>selinux</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <doi>0</doi>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </secmodel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <secmodel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model>dac</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <doi>0</doi>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </secmodel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </host>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <guest>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <os_type>hvm</os_type>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <arch name='i686'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <wordsize>32</wordsize>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <domain type='qemu'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <domain type='kvm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </arch>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <pae/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <nonpae/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <acpi default='on' toggle='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <apic default='on' toggle='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <cpuselection/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <deviceboot/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <disksnapshot default='on' toggle='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <externalSnapshot/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </guest>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <guest>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <os_type>hvm</os_type>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <arch name='x86_64'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <wordsize>64</wordsize>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <domain type='qemu'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <domain type='kvm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </arch>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <acpi default='on' toggle='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <apic default='on' toggle='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <cpuselection/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <deviceboot/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <disksnapshot default='on' toggle='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <externalSnapshot/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </guest>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 
Feb  1 10:06:41 np0005604375 nova_compute[238794]: </capabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.176 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.181 238798 WARNING nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.181 238798 DEBUG nova.virt.libvirt.volume.mount [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.204 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb  1 10:06:41 np0005604375 nova_compute[238794]: <domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <path>/usr/libexec/qemu-kvm</path>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <domain>kvm</domain>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <arch>i686</arch>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <vcpu max='240'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <iothreads supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <os supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='firmware'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <loader supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>rom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pflash</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='readonly'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>yes</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='secure'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </loader>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </os>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-passthrough' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='hostPassthroughMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='maximum' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='maximumMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-model' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <vendor>AMD</vendor>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='x2apic'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-deadline'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='hypervisor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc_adjust'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='spec-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='stibp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='cmp_legacy'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='overflow-recov'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='succor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='amd-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='virt-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lbrv'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-scale'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='vmcb-clean'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='flushbyasid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pause-filter'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pfthreshold'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='svme-addr-chk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='disable' name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='custom' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Dhyana-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v6'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v7'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <memoryBacking supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='sourceType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>anonymous</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>memfd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </memoryBacking>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <disk supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='diskDevice'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>disk</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cdrom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>floppy</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>lun</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ide</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>fdc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>sata</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </disk>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <graphics supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vnc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egl-headless</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </graphics>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <video supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='modelType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vga</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cirrus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>none</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>bochs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ramfb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </video>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hostdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='mode'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>subsystem</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='startupPolicy'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>mandatory</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>requisite</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>optional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='subsysType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pci</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='capsType'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='pciBackend'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hostdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <rng supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>random</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </rng>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <filesystem supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='driverType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>path</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>handle</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtiofs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </filesystem>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tpm supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-tis</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-crb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emulator</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>external</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendVersion'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>2.0</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </tpm>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <redirdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </redirdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <channel supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </channel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <crypto supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </crypto>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <interface supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>passt</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </interface>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <panic supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>isa</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>hyperv</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </panic>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <console supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>null</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dev</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pipe</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stdio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>udp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tcp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu-vdagent</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </console>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <gic supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <vmcoreinfo supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <genid supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backingStoreInput supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backup supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <async-teardown supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <s390-pv supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <ps2 supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tdx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sev supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sgx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hyperv supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='features'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>relaxed</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vapic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>spinlocks</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vpindex</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>runtime</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>synic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stimer</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reset</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vendor_id</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>frequencies</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reenlightenment</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tlbflush</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ipi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>avic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emsr_bitmap</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>xmm_input</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <spinlocks>4095</spinlocks>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <stimer_direct>on</stimer_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_direct>on</tlbflush_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_extended>on</tlbflush_extended>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hyperv>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <launchSecurity supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: </domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.211 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb  1 10:06:41 np0005604375 nova_compute[238794]: <domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <path>/usr/libexec/qemu-kvm</path>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <domain>kvm</domain>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <arch>i686</arch>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <vcpu max='4096'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <iothreads supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <os supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='firmware'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <loader supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>rom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pflash</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='readonly'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>yes</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='secure'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </loader>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </os>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-passthrough' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='hostPassthroughMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='maximum' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='maximumMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-model' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <vendor>AMD</vendor>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='x2apic'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-deadline'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='hypervisor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc_adjust'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='spec-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='stibp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='cmp_legacy'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='overflow-recov'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='succor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='amd-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='virt-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lbrv'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-scale'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='vmcb-clean'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='flushbyasid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pause-filter'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pfthreshold'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='svme-addr-chk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='disable' name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='custom' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Dhyana-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v6'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v7'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <memoryBacking supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='sourceType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>anonymous</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>memfd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </memoryBacking>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <disk supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='diskDevice'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>disk</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cdrom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>floppy</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>lun</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>fdc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>sata</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </disk>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <graphics supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vnc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egl-headless</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </graphics>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <video supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='modelType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vga</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cirrus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>none</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>bochs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ramfb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </video>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hostdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='mode'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>subsystem</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='startupPolicy'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>mandatory</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>requisite</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>optional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='subsysType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pci</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='capsType'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='pciBackend'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hostdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <rng supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>random</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </rng>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <filesystem supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='driverType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>path</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>handle</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtiofs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </filesystem>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tpm supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-tis</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-crb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emulator</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>external</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendVersion'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>2.0</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </tpm>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <redirdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </redirdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <channel supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </channel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <crypto supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </crypto>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <interface supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>passt</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </interface>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <panic supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>isa</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>hyperv</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </panic>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <console supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>null</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dev</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pipe</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stdio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>udp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tcp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu-vdagent</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </console>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <gic supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <vmcoreinfo supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <genid supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backingStoreInput supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backup supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <async-teardown supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <s390-pv supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <ps2 supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tdx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sev supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sgx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hyperv supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='features'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>relaxed</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vapic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>spinlocks</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vpindex</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>runtime</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>synic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stimer</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reset</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vendor_id</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>frequencies</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reenlightenment</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tlbflush</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ipi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>avic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emsr_bitmap</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>xmm_input</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <spinlocks>4095</spinlocks>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <stimer_direct>on</stimer_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_direct>on</tlbflush_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_extended>on</tlbflush_extended>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hyperv>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <launchSecurity supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: </domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.265 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.271 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb  1 10:06:41 np0005604375 nova_compute[238794]: <domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <path>/usr/libexec/qemu-kvm</path>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <domain>kvm</domain>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <arch>x86_64</arch>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <vcpu max='240'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <iothreads supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <os supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='firmware'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <loader supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>rom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pflash</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='readonly'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>yes</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='secure'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </loader>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </os>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-passthrough' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='hostPassthroughMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='maximum' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='maximumMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-model' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <vendor>AMD</vendor>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='x2apic'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-deadline'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='hypervisor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc_adjust'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='spec-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='stibp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='cmp_legacy'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='overflow-recov'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='succor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='amd-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='virt-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lbrv'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-scale'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='vmcb-clean'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='flushbyasid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pause-filter'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pfthreshold'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='svme-addr-chk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='disable' name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='custom' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Dhyana-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v6'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v7'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <memoryBacking supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='sourceType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>anonymous</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>memfd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </memoryBacking>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <disk supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='diskDevice'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>disk</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cdrom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>floppy</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>lun</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ide</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>fdc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>sata</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </disk>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <graphics supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vnc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egl-headless</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </graphics>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <video supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='modelType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vga</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cirrus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>none</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>bochs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ramfb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </video>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hostdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='mode'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>subsystem</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='startupPolicy'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>mandatory</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>requisite</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>optional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='subsysType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pci</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='capsType'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='pciBackend'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hostdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <rng supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>random</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </rng>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <filesystem supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='driverType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>path</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>handle</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtiofs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </filesystem>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tpm supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-tis</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-crb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emulator</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>external</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendVersion'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>2.0</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </tpm>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <redirdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </redirdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <channel supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </channel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <crypto supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </crypto>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <interface supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>passt</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </interface>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <panic supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>isa</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>hyperv</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </panic>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <console supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>null</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dev</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pipe</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stdio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>udp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tcp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu-vdagent</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </console>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <gic supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <vmcoreinfo supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <genid supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backingStoreInput supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backup supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <async-teardown supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <s390-pv supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <ps2 supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tdx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sev supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sgx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hyperv supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='features'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>relaxed</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vapic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>spinlocks</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vpindex</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>runtime</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>synic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stimer</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reset</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vendor_id</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>frequencies</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reenlightenment</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tlbflush</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ipi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>avic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emsr_bitmap</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>xmm_input</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <spinlocks>4095</spinlocks>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <stimer_direct>on</stimer_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_direct>on</tlbflush_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_extended>on</tlbflush_extended>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hyperv>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <launchSecurity supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: </domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.348 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb  1 10:06:41 np0005604375 nova_compute[238794]: <domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <path>/usr/libexec/qemu-kvm</path>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <domain>kvm</domain>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <arch>x86_64</arch>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <vcpu max='4096'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <iothreads supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <os supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='firmware'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>efi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <loader supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>rom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pflash</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='readonly'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>yes</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='secure'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>yes</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>no</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </loader>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </os>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-passthrough' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='hostPassthroughMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='maximum' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='maximumMigratable'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>on</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>off</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='host-model' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <vendor>AMD</vendor>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='x2apic'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-deadline'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='hypervisor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc_adjust'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='spec-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='stibp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='cmp_legacy'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='overflow-recov'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='succor'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='amd-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='virt-ssbd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lbrv'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='tsc-scale'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='vmcb-clean'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='flushbyasid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pause-filter'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='pfthreshold'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='svme-addr-chk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <feature policy='disable' name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <mode name='custom' supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Broadwell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cascadelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='ClearwaterForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ddpd-u'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sha512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm3'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sm4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Cooperlake-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Denverton-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Dhyana-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Genoa-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Milan-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Rome-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-Turin-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amd-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='auto-ibrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vp2intersect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fs-gs-base-ns'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibpb-brtype'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='no-nested-data-bp'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='null-sel-clr-base'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='perfmon-v2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbpb'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='srso-user-kernel-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='stibp-always-on'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='EPYC-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='GraniteRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-128'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-256'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx10-512'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='prefetchiti'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Haswell-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-noTSX'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v6'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Icelake-Server-v7'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='IvyBridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='KnightsMill-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4fmaps'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-4vnniw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512er'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512pf'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G4-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Opteron_G5-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fma4'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tbm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xop'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SapphireRapids-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='amx-tile'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-bf16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-fp16'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512-vpopcntdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bitalg'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vbmi2'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrc'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fzrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='la57'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='taa-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='tsx-ldtrk'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='SierraForest-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ifma'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-ne-convert'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx-vnni-int8'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bhi-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='bus-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cmpccxadd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fbsdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='fsrs'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ibrs-all'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='intel-psfd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ipred-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='lam'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mcdt-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pbrsb-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='psdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rrsba-ctrl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='sbdr-ssdp-no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='serialize'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vaes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='vpclmulqdq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Client-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='hle'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='rtm'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Skylake-Server-v5'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512bw'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512cd'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512dq'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512f'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='avx512vl'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='invpcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pcid'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='pku'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='mpx'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v2'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v3'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='core-capability'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='split-lock-detect'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='Snowridge-v4'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='cldemote'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='erms'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='gfni'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdir64b'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='movdiri'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='xsaves'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='athlon-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='core2duo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='coreduo-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='n270-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='ss'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <blockers model='phenom-v1'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnow'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <feature name='3dnowext'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </blockers>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </mode>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </cpu>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <memoryBacking supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <enum name='sourceType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>anonymous</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <value>memfd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </memoryBacking>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <disk supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='diskDevice'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>disk</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cdrom</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>floppy</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>lun</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>fdc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>sata</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </disk>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <graphics supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vnc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egl-headless</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </graphics>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <video supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='modelType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vga</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>cirrus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>none</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>bochs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ramfb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </video>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hostdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='mode'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>subsystem</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='startupPolicy'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>mandatory</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>requisite</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>optional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='subsysType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pci</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>scsi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='capsType'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='pciBackend'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hostdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <rng supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtio-non-transitional</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>random</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>egd</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </rng>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <filesystem supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='driverType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>path</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>handle</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>virtiofs</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </filesystem>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tpm supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-tis</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tpm-crb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emulator</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>external</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendVersion'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>2.0</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </tpm>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <redirdev supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='bus'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>usb</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </redirdev>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <channel supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </channel>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <crypto supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendModel'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>builtin</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </crypto>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <interface supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='backendType'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>default</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>passt</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </interface>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <panic supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='model'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>isa</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>hyperv</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </panic>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <console supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='type'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>null</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vc</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pty</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dev</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>file</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>pipe</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stdio</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>udp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tcp</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>unix</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>qemu-vdagent</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>dbus</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </console>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </devices>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  <features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <gic supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <vmcoreinfo supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <genid supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backingStoreInput supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <backup supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <async-teardown supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <s390-pv supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <ps2 supported='yes'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <tdx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sev supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <sgx supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <hyperv supported='yes'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <enum name='features'>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>relaxed</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vapic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>spinlocks</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vpindex</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>runtime</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>synic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>stimer</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reset</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>vendor_id</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>frequencies</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>reenlightenment</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>tlbflush</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>ipi</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>avic</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>emsr_bitmap</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <value>xmm_input</value>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </enum>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      <defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <spinlocks>4095</spinlocks>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <stimer_direct>on</stimer_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_direct>on</tlbflush_direct>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <tlbflush_extended>on</tlbflush_extended>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:      </defaults>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    </hyperv>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:    <launchSecurity supported='no'/>
Feb  1 10:06:41 np0005604375 nova_compute[238794]:  </features>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: </domainCapabilities>
Feb  1 10:06:41 np0005604375 nova_compute[238794]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.442 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.443 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.443 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.452 238798 INFO nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Secure Boot support detected#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.455 238798 INFO nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.455 238798 INFO nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.463 238798 DEBUG nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.573 238798 INFO nova.virt.node [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Determined node identity 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from /var/lib/nova/compute_id#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.599 238798 WARNING nova.compute.manager [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Compute nodes ['1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.641 238798 INFO nova.compute.manager [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 WARNING nova.compute.manager [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.688 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:06:41 np0005604375 nova_compute[238794]: 2026-02-01 15:06:41.688 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:06:41 np0005604375 podman[239115]: 2026-02-01 15:06:41.970269324 +0000 UTC m=+0.057646717 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:06:42 np0005604375 podman[239116]: 2026-02-01 15:06:42.033027233 +0000 UTC m=+0.116053794 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb  1 10:06:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:06:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094871060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.228 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:06:42 np0005604375 systemd[1]: Starting libvirt nodedev daemon...
Feb  1 10:06:42 np0005604375 systemd[1]: Started libvirt nodedev daemon.
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.537 238798 WARNING nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.539 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.540 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.540 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.590 238798 WARNING nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] No compute node record for compute-0.ctlplane.example.com:1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 could not be found.#033[00m
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.627 238798 INFO nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18#033[00m
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.699 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:06:42 np0005604375 nova_compute[238794]: 2026-02-01 15:06:42.699 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:06:43 np0005604375 nova_compute[238794]: 2026-02-01 15:06:43.587 238798 INFO nova.scheduler.client.report [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] [req-3102e39e-46ff-4296-8902-516294c380d5] Created resource provider record via placement API for resource provider with UUID 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 and name compute-0.ctlplane.example.com.#033[00m
Feb  1 10:06:43 np0005604375 nova_compute[238794]: 2026-02-01 15:06:43.975 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:06:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:06:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619802062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.525 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.530 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb  1 10:06:44 np0005604375 nova_compute[238794]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.530 238798 INFO nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] kernel doesn't support AMD SEV#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.531 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.532 238798 DEBUG nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.615 238798 DEBUG nova.scheduler.client.report [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updated inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.615 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.616 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.781 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.811 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.811 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:06:44 np0005604375 nova_compute[238794]: 2026-02-01 15:06:44.812 238798 DEBUG nova.service [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Feb  1 10:06:45 np0005604375 nova_compute[238794]: 2026-02-01 15:06:45.021 238798 DEBUG nova.service [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Feb  1 10:06:45 np0005604375 nova_compute[238794]: 2026-02-01 15:06:45.021 238798 DEBUG nova.servicegroup.drivers.db [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Feb  1 10:06:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:06:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:06:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:06:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:06:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:06:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:06:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:06:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:06:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609046524' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:07:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609046524' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:07:04 np0005604375 podman[239347]: 2026-02-01 15:07:04.052196397 +0000 UTC m=+0.043693416 container create 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb  1 10:07:04 np0005604375 systemd[1]: Started libpod-conmon-5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0.scope.
Feb  1 10:07:04 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:07:04 np0005604375 podman[239347]: 2026-02-01 15:07:04.11610526 +0000 UTC m=+0.107602299 container init 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:07:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  1 10:07:04 np0005604375 podman[239347]: 2026-02-01 15:07:04.121217983 +0000 UTC m=+0.112715002 container start 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 10:07:04 np0005604375 podman[239347]: 2026-02-01 15:07:04.123872878 +0000 UTC m=+0.115369937 container attach 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb  1 10:07:04 np0005604375 wizardly_curie[239364]: 167 167
Feb  1 10:07:04 np0005604375 systemd[1]: libpod-5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0.scope: Deactivated successfully.
Feb  1 10:07:04 np0005604375 podman[239347]: 2026-02-01 15:07:04.030743306 +0000 UTC m=+0.022240365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:07:04 np0005604375 podman[239347]: 2026-02-01 15:07:04.125969407 +0000 UTC m=+0.117466406 container died 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 10:07:04 np0005604375 systemd[1]: var-lib-containers-storage-overlay-13f38c7df6839b8799cd7c419f8378300138bdd745228325af0dbcb37adaaf7f-merged.mount: Deactivated successfully.
Feb  1 10:07:04 np0005604375 podman[239347]: 2026-02-01 15:07:04.165210807 +0000 UTC m=+0.156707826 container remove 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 10:07:04 np0005604375 systemd[1]: libpod-conmon-5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0.scope: Deactivated successfully.
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:07:04 np0005604375 podman[239389]: 2026-02-01 15:07:04.297608981 +0000 UTC m=+0.049655724 container create 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3970080740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3970080740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:07:04 np0005604375 systemd[1]: Started libpod-conmon-15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e.scope.
Feb  1 10:07:04 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:07:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:04 np0005604375 podman[239389]: 2026-02-01 15:07:04.280742048 +0000 UTC m=+0.032788821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:07:04 np0005604375 podman[239389]: 2026-02-01 15:07:04.376780612 +0000 UTC m=+0.128827365 container init 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 10:07:04 np0005604375 podman[239389]: 2026-02-01 15:07:04.385327702 +0000 UTC m=+0.137374435 container start 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 10:07:04 np0005604375 podman[239389]: 2026-02-01 15:07:04.388936683 +0000 UTC m=+0.140983446 container attach 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:07:04 np0005604375 intelligent_keldysh[239406]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:07:04 np0005604375 intelligent_keldysh[239406]: --> All data devices are unavailable
Feb  1 10:07:04 np0005604375 systemd[1]: libpod-15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e.scope: Deactivated successfully.
Feb  1 10:07:04 np0005604375 podman[239389]: 2026-02-01 15:07:04.814984254 +0000 UTC m=+0.567031017 container died 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1644827985' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:07:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1644827985' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:07:04 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb-merged.mount: Deactivated successfully.
Feb  1 10:07:04 np0005604375 podman[239389]: 2026-02-01 15:07:04.860744397 +0000 UTC m=+0.612791130 container remove 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:07:04 np0005604375 systemd[1]: libpod-conmon-15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e.scope: Deactivated successfully.
Feb  1 10:07:05 np0005604375 podman[239500]: 2026-02-01 15:07:05.275037999 +0000 UTC m=+0.035467396 container create 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 10:07:05 np0005604375 systemd[1]: Started libpod-conmon-8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44.scope.
Feb  1 10:07:05 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:07:05 np0005604375 podman[239500]: 2026-02-01 15:07:05.323163038 +0000 UTC m=+0.083592435 container init 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:07:05 np0005604375 podman[239500]: 2026-02-01 15:07:05.326875983 +0000 UTC m=+0.087305360 container start 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  1 10:07:05 np0005604375 pensive_bell[239515]: 167 167
Feb  1 10:07:05 np0005604375 systemd[1]: libpod-8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44.scope: Deactivated successfully.
Feb  1 10:07:05 np0005604375 podman[239500]: 2026-02-01 15:07:05.331252225 +0000 UTC m=+0.091681602 container attach 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 10:07:05 np0005604375 podman[239500]: 2026-02-01 15:07:05.331721559 +0000 UTC m=+0.092150936 container died 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 10:07:05 np0005604375 systemd[1]: var-lib-containers-storage-overlay-651c9af229070a734b8a9a07e6d86adb0445919b7cdb9356b9aa9be9904f67e0-merged.mount: Deactivated successfully.
Feb  1 10:07:05 np0005604375 podman[239500]: 2026-02-01 15:07:05.257852306 +0000 UTC m=+0.018281743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:07:05 np0005604375 podman[239500]: 2026-02-01 15:07:05.363107699 +0000 UTC m=+0.123537076 container remove 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:07:05 np0005604375 systemd[1]: libpod-conmon-8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44.scope: Deactivated successfully.
Feb  1 10:07:05 np0005604375 podman[239541]: 2026-02-01 15:07:05.472356963 +0000 UTC m=+0.038964294 container create 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 10:07:05 np0005604375 systemd[1]: Started libpod-conmon-66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5.scope.
Feb  1 10:07:05 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:07:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:05 np0005604375 podman[239541]: 2026-02-01 15:07:05.456481958 +0000 UTC m=+0.023089309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:07:05 np0005604375 podman[239541]: 2026-02-01 15:07:05.558360296 +0000 UTC m=+0.124967647 container init 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:07:05 np0005604375 podman[239541]: 2026-02-01 15:07:05.562653116 +0000 UTC m=+0.129260477 container start 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:07:05 np0005604375 podman[239541]: 2026-02-01 15:07:05.56599599 +0000 UTC m=+0.132603411 container attach 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]: {
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:    "0": [
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:        {
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "devices": [
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "/dev/loop3"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            ],
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_name": "ceph_lv0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_size": "21470642176",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "name": "ceph_lv0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "tags": {
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cluster_name": "ceph",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.crush_device_class": "",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.encrypted": "0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.objectstore": "bluestore",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osd_id": "0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.type": "block",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.vdo": "0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.with_tpm": "0"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            },
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "type": "block",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "vg_name": "ceph_vg0"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:        }
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:    ],
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:    "1": [
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:        {
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "devices": [
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "/dev/loop4"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            ],
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_name": "ceph_lv1",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_size": "21470642176",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "name": "ceph_lv1",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "tags": {
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cluster_name": "ceph",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.crush_device_class": "",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.encrypted": "0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.objectstore": "bluestore",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osd_id": "1",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.type": "block",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.vdo": "0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.with_tpm": "0"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            },
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "type": "block",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "vg_name": "ceph_vg1"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:        }
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:    ],
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:    "2": [
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:        {
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "devices": [
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "/dev/loop5"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            ],
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_name": "ceph_lv2",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_size": "21470642176",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "name": "ceph_lv2",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "tags": {
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.cluster_name": "ceph",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.crush_device_class": "",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.encrypted": "0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.objectstore": "bluestore",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osd_id": "2",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.type": "block",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.vdo": "0",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:                "ceph.with_tpm": "0"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            },
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "type": "block",
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:            "vg_name": "ceph_vg2"
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:        }
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]:    ]
Feb  1 10:07:05 np0005604375 angry_mirzakhani[239558]: }
Feb  1 10:07:05 np0005604375 systemd[1]: libpod-66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5.scope: Deactivated successfully.
Feb  1 10:07:05 np0005604375 podman[239541]: 2026-02-01 15:07:05.818171043 +0000 UTC m=+0.384778414 container died 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 10:07:05 np0005604375 systemd[1]: var-lib-containers-storage-overlay-47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e-merged.mount: Deactivated successfully.
Feb  1 10:07:05 np0005604375 podman[239541]: 2026-02-01 15:07:05.861704344 +0000 UTC m=+0.428311685 container remove 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 10:07:05 np0005604375 systemd[1]: libpod-conmon-66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5.scope: Deactivated successfully.
Feb  1 10:07:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  1 10:07:06 np0005604375 podman[239641]: 2026-02-01 15:07:06.295680697 +0000 UTC m=+0.048270835 container create 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb  1 10:07:06 np0005604375 systemd[1]: Started libpod-conmon-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope.
Feb  1 10:07:06 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:07:06 np0005604375 podman[239641]: 2026-02-01 15:07:06.355972128 +0000 UTC m=+0.108562346 container init 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 10:07:06 np0005604375 podman[239641]: 2026-02-01 15:07:06.361984557 +0000 UTC m=+0.114574685 container start 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 10:07:06 np0005604375 systemd[1]: libpod-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope: Deactivated successfully.
Feb  1 10:07:06 np0005604375 podman[239641]: 2026-02-01 15:07:06.365624119 +0000 UTC m=+0.118214337 container attach 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  1 10:07:06 np0005604375 nervous_taussig[239657]: 167 167
Feb  1 10:07:06 np0005604375 conmon[239657]: conmon 744627a74af973283a9f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope/container/memory.events
Feb  1 10:07:06 np0005604375 podman[239641]: 2026-02-01 15:07:06.366452362 +0000 UTC m=+0.119042520 container died 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:07:06 np0005604375 podman[239641]: 2026-02-01 15:07:06.279921415 +0000 UTC m=+0.032511593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:07:06 np0005604375 systemd[1]: var-lib-containers-storage-overlay-3148389af57955aee4f77f4b6301a9e1052bb033b92f0e8c5564aad0d9452867-merged.mount: Deactivated successfully.
Feb  1 10:07:06 np0005604375 podman[239641]: 2026-02-01 15:07:06.399007275 +0000 UTC m=+0.151597433 container remove 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:07:06 np0005604375 systemd[1]: libpod-conmon-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope: Deactivated successfully.
Feb  1 10:07:06 np0005604375 podman[239681]: 2026-02-01 15:07:06.559130217 +0000 UTC m=+0.041756873 container create aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 10:07:06 np0005604375 systemd[1]: Started libpod-conmon-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope.
Feb  1 10:07:06 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:07:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:06 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:07:06 np0005604375 podman[239681]: 2026-02-01 15:07:06.539830555 +0000 UTC m=+0.022457241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:07:06 np0005604375 podman[239681]: 2026-02-01 15:07:06.646827937 +0000 UTC m=+0.129454603 container init aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb  1 10:07:06 np0005604375 podman[239681]: 2026-02-01 15:07:06.651998692 +0000 UTC m=+0.134625338 container start aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:07:06 np0005604375 podman[239681]: 2026-02-01 15:07:06.655183581 +0000 UTC m=+0.137810227 container attach aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 10:07:07 np0005604375 lvm[239776]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:07:07 np0005604375 lvm[239775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:07:07 np0005604375 lvm[239775]: VG ceph_vg0 finished
Feb  1 10:07:07 np0005604375 lvm[239776]: VG ceph_vg1 finished
Feb  1 10:07:07 np0005604375 lvm[239778]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:07:07 np0005604375 lvm[239778]: VG ceph_vg2 finished
Feb  1 10:07:07 np0005604375 hardcore_yalow[239697]: {}
Feb  1 10:07:07 np0005604375 systemd[1]: libpod-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope: Deactivated successfully.
Feb  1 10:07:07 np0005604375 podman[239681]: 2026-02-01 15:07:07.385462926 +0000 UTC m=+0.868089572 container died aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 10:07:07 np0005604375 systemd[1]: libpod-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope: Consumed 1.011s CPU time.
Feb  1 10:07:07 np0005604375 systemd[1]: var-lib-containers-storage-overlay-3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287-merged.mount: Deactivated successfully.
Feb  1 10:07:07 np0005604375 podman[239681]: 2026-02-01 15:07:07.422263928 +0000 UTC m=+0.904890574 container remove aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:07:07 np0005604375 systemd[1]: libpod-conmon-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope: Deactivated successfully.
Feb  1 10:07:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:07:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:07:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:07:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:07:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:07:07.801 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:07:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:07:07.802 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:07:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:07:07.802 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:07:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  1 10:07:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:07:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:07:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  1 10:07:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  1 10:07:12 np0005604375 podman[239818]: 2026-02-01 15:07:12.988374726 +0000 UTC m=+0.056978438 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb  1 10:07:13 np0005604375 podman[239819]: 2026-02-01 15:07:13.013067339 +0000 UTC m=+0.086410614 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb  1 10:07:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:07:17
Feb  1 10:07:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:07:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:07:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.log']
Feb  1 10:07:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:07:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:07:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:07:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:07:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:36 np0005604375 nova_compute[238794]: 2026-02-01 15:07:36.024 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.026282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456026356, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1871, "num_deletes": 251, "total_data_size": 3203739, "memory_usage": 3250424, "flush_reason": "Manual Compaction"}
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456036383, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1802269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11744, "largest_seqno": 13614, "table_properties": {"data_size": 1796203, "index_size": 3077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15257, "raw_average_key_size": 20, "raw_value_size": 1782777, "raw_average_value_size": 2358, "num_data_blocks": 142, "num_entries": 756, "num_filter_entries": 756, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958242, "oldest_key_time": 1769958242, "file_creation_time": 1769958456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10123 microseconds, and 4946 cpu microseconds.
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.036433) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1802269 bytes OK
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.036455) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038015) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038038) EVENT_LOG_v1 {"time_micros": 1769958456038031, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038059) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3195839, prev total WAL file size 3195839, number of live WAL files 2.
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038883) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1760KB)], [29(7862KB)]
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456038959, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9853173, "oldest_snapshot_seqno": -1}
Feb  1 10:07:36 np0005604375 nova_compute[238794]: 2026-02-01 15:07:36.052 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4044 keys, 7842936 bytes, temperature: kUnknown
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456087968, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7842936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7813932, "index_size": 17822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 96088, "raw_average_key_size": 23, "raw_value_size": 7739041, "raw_average_value_size": 1913, "num_data_blocks": 777, "num_entries": 4044, "num_filter_entries": 4044, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.088385) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7842936 bytes
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.089757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.6 rd, 159.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(9.8) write-amplify(4.4) OK, records in: 4457, records dropped: 413 output_compression: NoCompression
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.089789) EVENT_LOG_v1 {"time_micros": 1769958456089773, "job": 12, "event": "compaction_finished", "compaction_time_micros": 49117, "compaction_time_cpu_micros": 19488, "output_level": 6, "num_output_files": 1, "total_output_size": 7842936, "num_input_records": 4457, "num_output_records": 4044, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456090213, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456091716, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:07:36 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:07:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.351 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.418 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:07:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:07:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587570910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:07:40 np0005604375 nova_compute[238794]: 2026-02-01 15:07:40.895 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:07:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.098 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.099 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.099 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.099 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.265 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.265 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.298 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:07:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:07:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263713135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.834 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.838 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.858 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.859 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:07:41 np0005604375 nova_compute[238794]: 2026-02-01 15:07:41.860 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:07:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:43 np0005604375 podman[239907]: 2026-02-01 15:07:43.984963744 +0000 UTC m=+0.059286574 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb  1 10:07:44 np0005604375 podman[239908]: 2026-02-01 15:07:44.038359641 +0000 UTC m=+0.112960849 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb  1 10:07:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:07:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:07:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:07:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:07:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:07:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:07:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb  1 10:07:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3497808587' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb  1 10:07:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb  1 10:07:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  1 10:07:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  1 10:07:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:07:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:07:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb  1 10:08:06 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb  1 10:08:06 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb  1 10:08:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  1 10:08:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  1 10:08:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:08:07.802 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:08:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:08:07.803 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:08:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:08:07.803 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:08:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:08:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:08:08 np0005604375 podman[240096]: 2026-02-01 15:08:08.556505939 +0000 UTC m=+0.046478465 container create 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 10:08:08 np0005604375 systemd[1]: Started libpod-conmon-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope.
Feb  1 10:08:08 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:08:08 np0005604375 podman[240096]: 2026-02-01 15:08:08.628473858 +0000 UTC m=+0.118446434 container init 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:08:08 np0005604375 podman[240096]: 2026-02-01 15:08:08.540611633 +0000 UTC m=+0.030584209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:08:08 np0005604375 podman[240096]: 2026-02-01 15:08:08.636846973 +0000 UTC m=+0.126819499 container start 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:08:08 np0005604375 podman[240096]: 2026-02-01 15:08:08.640410953 +0000 UTC m=+0.130383489 container attach 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:08:08 np0005604375 epic_mayer[240112]: 167 167
Feb  1 10:08:08 np0005604375 systemd[1]: libpod-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope: Deactivated successfully.
Feb  1 10:08:08 np0005604375 conmon[240112]: conmon 5a0b69d9c4317a76305b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope/container/memory.events
Feb  1 10:08:08 np0005604375 podman[240096]: 2026-02-01 15:08:08.644160348 +0000 UTC m=+0.134132924 container died 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:08:08 np0005604375 systemd[1]: var-lib-containers-storage-overlay-50b44a5125852e65fabf5e833ac7e78cc6799ca58df5d0b78d9abcd6958b9975-merged.mount: Deactivated successfully.
Feb  1 10:08:08 np0005604375 podman[240096]: 2026-02-01 15:08:08.679357035 +0000 UTC m=+0.169329571 container remove 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 10:08:08 np0005604375 systemd[1]: libpod-conmon-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope: Deactivated successfully.
Feb  1 10:08:08 np0005604375 podman[240135]: 2026-02-01 15:08:08.805821663 +0000 UTC m=+0.033967194 container create 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:08:08 np0005604375 systemd[1]: Started libpod-conmon-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope.
Feb  1 10:08:08 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:08:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:08 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:08 np0005604375 podman[240135]: 2026-02-01 15:08:08.789257538 +0000 UTC m=+0.017403109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:08:08 np0005604375 podman[240135]: 2026-02-01 15:08:08.899418988 +0000 UTC m=+0.127564579 container init 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 10:08:08 np0005604375 podman[240135]: 2026-02-01 15:08:08.90840581 +0000 UTC m=+0.136551361 container start 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 10:08:08 np0005604375 podman[240135]: 2026-02-01 15:08:08.911745874 +0000 UTC m=+0.139891415 container attach 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:08:09 np0005604375 inspiring_ramanujan[240151]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:08:09 np0005604375 inspiring_ramanujan[240151]: --> All data devices are unavailable
Feb  1 10:08:09 np0005604375 systemd[1]: libpod-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope: Deactivated successfully.
Feb  1 10:08:09 np0005604375 conmon[240151]: conmon 148706f96b960f2ab0f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope/container/memory.events
Feb  1 10:08:09 np0005604375 podman[240135]: 2026-02-01 15:08:09.343266438 +0000 UTC m=+0.571411979 container died 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:08:09 np0005604375 systemd[1]: var-lib-containers-storage-overlay-59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894-merged.mount: Deactivated successfully.
Feb  1 10:08:09 np0005604375 podman[240135]: 2026-02-01 15:08:09.384929167 +0000 UTC m=+0.613074718 container remove 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:08:09 np0005604375 systemd[1]: libpod-conmon-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope: Deactivated successfully.
Feb  1 10:08:09 np0005604375 podman[240245]: 2026-02-01 15:08:09.791270515 +0000 UTC m=+0.040482007 container create a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 10:08:09 np0005604375 systemd[1]: Started libpod-conmon-a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0.scope.
Feb  1 10:08:09 np0005604375 podman[240245]: 2026-02-01 15:08:09.769949607 +0000 UTC m=+0.019161179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:08:09 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:08:09 np0005604375 podman[240245]: 2026-02-01 15:08:09.88272994 +0000 UTC m=+0.131941452 container init a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:08:09 np0005604375 podman[240245]: 2026-02-01 15:08:09.88985167 +0000 UTC m=+0.139063162 container start a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:08:09 np0005604375 podman[240245]: 2026-02-01 15:08:09.89307584 +0000 UTC m=+0.142287342 container attach a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 10:08:09 np0005604375 crazy_yonath[240261]: 167 167
Feb  1 10:08:09 np0005604375 systemd[1]: libpod-a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0.scope: Deactivated successfully.
Feb  1 10:08:09 np0005604375 podman[240245]: 2026-02-01 15:08:09.896911298 +0000 UTC m=+0.146122820 container died a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:08:09 np0005604375 systemd[1]: var-lib-containers-storage-overlay-7b4606808dd8ac03d7ff967ee016aef21301ef626310522739cec18bb0a34256-merged.mount: Deactivated successfully.
Feb  1 10:08:09 np0005604375 podman[240245]: 2026-02-01 15:08:09.981856491 +0000 UTC m=+0.231068013 container remove a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:08:09 np0005604375 systemd[1]: libpod-conmon-a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0.scope: Deactivated successfully.
Feb  1 10:08:10 np0005604375 podman[240287]: 2026-02-01 15:08:10.144916595 +0000 UTC m=+0.046789654 container create a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:08:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:10 np0005604375 systemd[1]: Started libpod-conmon-a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661.scope.
Feb  1 10:08:10 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:08:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:10 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:10 np0005604375 podman[240287]: 2026-02-01 15:08:10.210402341 +0000 UTC m=+0.112275410 container init a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:08:10 np0005604375 podman[240287]: 2026-02-01 15:08:10.215402511 +0000 UTC m=+0.117275570 container start a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 10:08:10 np0005604375 podman[240287]: 2026-02-01 15:08:10.219479125 +0000 UTC m=+0.121352174 container attach a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  1 10:08:10 np0005604375 podman[240287]: 2026-02-01 15:08:10.127865256 +0000 UTC m=+0.029738295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]: {
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:    "0": [
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:        {
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "devices": [
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "/dev/loop3"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            ],
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_name": "ceph_lv0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_size": "21470642176",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "name": "ceph_lv0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "tags": {
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cluster_name": "ceph",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.crush_device_class": "",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.encrypted": "0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.objectstore": "bluestore",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osd_id": "0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.type": "block",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.vdo": "0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.with_tpm": "0"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            },
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "type": "block",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "vg_name": "ceph_vg0"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:        }
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:    ],
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:    "1": [
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:        {
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "devices": [
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "/dev/loop4"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            ],
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_name": "ceph_lv1",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_size": "21470642176",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "name": "ceph_lv1",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "tags": {
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cluster_name": "ceph",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.crush_device_class": "",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.encrypted": "0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.objectstore": "bluestore",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osd_id": "1",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.type": "block",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.vdo": "0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.with_tpm": "0"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            },
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "type": "block",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "vg_name": "ceph_vg1"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:        }
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:    ],
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:    "2": [
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:        {
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "devices": [
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "/dev/loop5"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            ],
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_name": "ceph_lv2",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_size": "21470642176",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "name": "ceph_lv2",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "tags": {
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.cluster_name": "ceph",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.crush_device_class": "",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.encrypted": "0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.objectstore": "bluestore",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osd_id": "2",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.type": "block",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.vdo": "0",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:                "ceph.with_tpm": "0"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            },
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "type": "block",
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:            "vg_name": "ceph_vg2"
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:        }
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]:    ]
Feb  1 10:08:10 np0005604375 jovial_mendeleev[240304]: }
Feb  1 10:08:10 np0005604375 systemd[1]: libpod-a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661.scope: Deactivated successfully.
Feb  1 10:08:10 np0005604375 podman[240287]: 2026-02-01 15:08:10.482488773 +0000 UTC m=+0.384361802 container died a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:08:10 np0005604375 systemd[1]: var-lib-containers-storage-overlay-cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b-merged.mount: Deactivated successfully.
Feb  1 10:08:10 np0005604375 podman[240287]: 2026-02-01 15:08:10.524493701 +0000 UTC m=+0.426366730 container remove a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:08:10 np0005604375 systemd[1]: libpod-conmon-a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661.scope: Deactivated successfully.
Feb  1 10:08:10 np0005604375 podman[240386]: 2026-02-01 15:08:10.954584875 +0000 UTC m=+0.041316270 container create 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  1 10:08:10 np0005604375 systemd[1]: Started libpod-conmon-3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886.scope.
Feb  1 10:08:11 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:08:11 np0005604375 podman[240386]: 2026-02-01 15:08:11.027062948 +0000 UTC m=+0.113794363 container init 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 10:08:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:11 np0005604375 podman[240386]: 2026-02-01 15:08:10.934572574 +0000 UTC m=+0.021303959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:08:11 np0005604375 podman[240386]: 2026-02-01 15:08:11.037337166 +0000 UTC m=+0.124068531 container start 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:08:11 np0005604375 friendly_engelbart[240402]: 167 167
Feb  1 10:08:11 np0005604375 systemd[1]: libpod-3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886.scope: Deactivated successfully.
Feb  1 10:08:11 np0005604375 podman[240386]: 2026-02-01 15:08:11.041577955 +0000 UTC m=+0.128309320 container attach 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 10:08:11 np0005604375 podman[240386]: 2026-02-01 15:08:11.042027158 +0000 UTC m=+0.128758533 container died 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:08:11 np0005604375 systemd[1]: var-lib-containers-storage-overlay-844b428d7218f8032635d9de88e00a8d7004080e89e32bd85ef9e06a34eed583-merged.mount: Deactivated successfully.
Feb  1 10:08:11 np0005604375 podman[240386]: 2026-02-01 15:08:11.07847434 +0000 UTC m=+0.165205715 container remove 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:08:11 np0005604375 systemd[1]: libpod-conmon-3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886.scope: Deactivated successfully.
Feb  1 10:08:11 np0005604375 podman[240427]: 2026-02-01 15:08:11.269952211 +0000 UTC m=+0.079087769 container create d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:08:11 np0005604375 systemd[1]: Started libpod-conmon-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope.
Feb  1 10:08:11 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:08:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:08:11 np0005604375 podman[240427]: 2026-02-01 15:08:11.348612608 +0000 UTC m=+0.157748186 container init d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:08:11 np0005604375 podman[240427]: 2026-02-01 15:08:11.257579424 +0000 UTC m=+0.066715002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:08:11 np0005604375 podman[240427]: 2026-02-01 15:08:11.356775316 +0000 UTC m=+0.165910884 container start d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 10:08:11 np0005604375 podman[240427]: 2026-02-01 15:08:11.359867253 +0000 UTC m=+0.169002831 container attach d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 10:08:11 np0005604375 lvm[240522]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:08:11 np0005604375 lvm[240521]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:08:11 np0005604375 lvm[240521]: VG ceph_vg0 finished
Feb  1 10:08:11 np0005604375 lvm[240522]: VG ceph_vg1 finished
Feb  1 10:08:12 np0005604375 lvm[240524]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:08:12 np0005604375 lvm[240524]: VG ceph_vg2 finished
Feb  1 10:08:12 np0005604375 trusting_borg[240443]: {}
Feb  1 10:08:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:12 np0005604375 systemd[1]: libpod-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope: Deactivated successfully.
Feb  1 10:08:12 np0005604375 systemd[1]: libpod-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope: Consumed 1.111s CPU time.
Feb  1 10:08:12 np0005604375 podman[240427]: 2026-02-01 15:08:12.154548364 +0000 UTC m=+0.963683932 container died d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb  1 10:08:12 np0005604375 systemd[1]: var-lib-containers-storage-overlay-2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b-merged.mount: Deactivated successfully.
Feb  1 10:08:12 np0005604375 podman[240427]: 2026-02-01 15:08:12.19718325 +0000 UTC m=+1.006318808 container remove d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:08:12 np0005604375 systemd[1]: libpod-conmon-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope: Deactivated successfully.
Feb  1 10:08:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:08:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:08:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:08:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:08:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:08:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:08:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:14 np0005604375 podman[240564]: 2026-02-01 15:08:14.957853228 +0000 UTC m=+0.048281146 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:08:15 np0005604375 podman[240565]: 2026-02-01 15:08:15.029971751 +0000 UTC m=+0.112180698 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  1 10:08:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:08:17
Feb  1 10:08:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:08:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:08:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', '.rgw.root']
Feb  1 10:08:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:08:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:08:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:08:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:08:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.853 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.853 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.879 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.880 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.880 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.880 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.912 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:08:41 np0005604375 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:08:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:08:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350715595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.416 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.572 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.573 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5120MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.573 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.573 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.655 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.655 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:08:42 np0005604375 nova_compute[238794]: 2026-02-01 15:08:42.695 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:08:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:08:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458587239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.192 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.196 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.225 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.227 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.227 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.667 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.668 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.668 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.691 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.691 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.692 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.692 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:43 np0005604375 nova_compute[238794]: 2026-02-01 15:08:43.692 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:08:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:45 np0005604375 podman[240651]: 2026-02-01 15:08:45.982487683 +0000 UTC m=+0.064291784 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb  1 10:08:46 np0005604375 podman[240652]: 2026-02-01 15:08:46.003487242 +0000 UTC m=+0.089198113 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 10:08:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:08:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:08:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:08:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:08:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:08:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:08:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:08:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774348116' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:08:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:08:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774348116' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:08:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:08:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:08:56 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:08:56.878 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:08:56 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:08:56.879 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:08:56 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:08:56.879 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:08:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:09:07.804 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:09:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:09:07.804 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:09:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:09:07.804 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:09:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:09:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:09:13 np0005604375 podman[240837]: 2026-02-01 15:09:13.241885923 +0000 UTC m=+0.037618915 container create e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:09:13 np0005604375 systemd[1]: Started libpod-conmon-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope.
Feb  1 10:09:13 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:09:13 np0005604375 podman[240837]: 2026-02-01 15:09:13.306550695 +0000 UTC m=+0.102283687 container init e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  1 10:09:13 np0005604375 podman[240837]: 2026-02-01 15:09:13.31103659 +0000 UTC m=+0.106769582 container start e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:09:13 np0005604375 podman[240837]: 2026-02-01 15:09:13.31388209 +0000 UTC m=+0.109615102 container attach e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:09:13 np0005604375 systemd[1]: libpod-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope: Deactivated successfully.
Feb  1 10:09:13 np0005604375 stoic_banzai[240853]: 167 167
Feb  1 10:09:13 np0005604375 conmon[240853]: conmon e2054249645289c7dd2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope/container/memory.events
Feb  1 10:09:13 np0005604375 podman[240837]: 2026-02-01 15:09:13.315486045 +0000 UTC m=+0.111219037 container died e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 10:09:13 np0005604375 podman[240837]: 2026-02-01 15:09:13.227118459 +0000 UTC m=+0.022851451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:09:13 np0005604375 systemd[1]: var-lib-containers-storage-overlay-5fed91d021305d3fc4b41599531131e80cd5b89a58f8a31e35bf13f72672c97b-merged.mount: Deactivated successfully.
Feb  1 10:09:13 np0005604375 podman[240837]: 2026-02-01 15:09:13.349518888 +0000 UTC m=+0.145251880 container remove e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  1 10:09:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:09:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:09:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:09:13 np0005604375 systemd[1]: libpod-conmon-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope: Deactivated successfully.
Feb  1 10:09:13 np0005604375 podman[240878]: 2026-02-01 15:09:13.448381408 +0000 UTC m=+0.028017306 container create 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 10:09:13 np0005604375 systemd[1]: Started libpod-conmon-95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d.scope.
Feb  1 10:09:13 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:09:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:13 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:13 np0005604375 podman[240878]: 2026-02-01 15:09:13.505546 +0000 UTC m=+0.085181928 container init 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:09:13 np0005604375 podman[240878]: 2026-02-01 15:09:13.510329474 +0000 UTC m=+0.089965372 container start 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:09:13 np0005604375 podman[240878]: 2026-02-01 15:09:13.513176373 +0000 UTC m=+0.092812291 container attach 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 10:09:13 np0005604375 podman[240878]: 2026-02-01 15:09:13.437146933 +0000 UTC m=+0.016782851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:09:13 np0005604375 stoic_montalcini[240894]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:09:13 np0005604375 stoic_montalcini[240894]: --> All data devices are unavailable
Feb  1 10:09:13 np0005604375 systemd[1]: libpod-95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d.scope: Deactivated successfully.
Feb  1 10:09:13 np0005604375 podman[240878]: 2026-02-01 15:09:13.877002356 +0000 UTC m=+0.456638254 container died 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 10:09:13 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef-merged.mount: Deactivated successfully.
Feb  1 10:09:14 np0005604375 podman[240878]: 2026-02-01 15:09:14.102134814 +0000 UTC m=+0.681770712 container remove 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 10:09:14 np0005604375 systemd[1]: libpod-conmon-95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d.scope: Deactivated successfully.
Feb  1 10:09:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:14 np0005604375 podman[240989]: 2026-02-01 15:09:14.457954042 +0000 UTC m=+0.037048909 container create c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:09:14 np0005604375 systemd[1]: Started libpod-conmon-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope.
Feb  1 10:09:14 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:09:14 np0005604375 podman[240989]: 2026-02-01 15:09:14.525878705 +0000 UTC m=+0.104973592 container init c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  1 10:09:14 np0005604375 podman[240989]: 2026-02-01 15:09:14.531127202 +0000 UTC m=+0.110222069 container start c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 10:09:14 np0005604375 systemd[1]: libpod-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope: Deactivated successfully.
Feb  1 10:09:14 np0005604375 distracted_sutherland[241005]: 167 167
Feb  1 10:09:14 np0005604375 conmon[241005]: conmon c89bfc085270b69c0c92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope/container/memory.events
Feb  1 10:09:14 np0005604375 podman[240989]: 2026-02-01 15:09:14.441859591 +0000 UTC m=+0.020954558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:09:14 np0005604375 podman[240989]: 2026-02-01 15:09:14.537040958 +0000 UTC m=+0.116135845 container attach c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 10:09:14 np0005604375 podman[240989]: 2026-02-01 15:09:14.537399638 +0000 UTC m=+0.116494505 container died c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:09:14 np0005604375 systemd[1]: var-lib-containers-storage-overlay-bd37e92aea0f5151b20a0c477390d70db35fdf1778d0cd092d0db50c38801461-merged.mount: Deactivated successfully.
Feb  1 10:09:14 np0005604375 podman[240989]: 2026-02-01 15:09:14.613065408 +0000 UTC m=+0.192160275 container remove c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 10:09:14 np0005604375 systemd[1]: libpod-conmon-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope: Deactivated successfully.
Feb  1 10:09:14 np0005604375 podman[241031]: 2026-02-01 15:09:14.755047694 +0000 UTC m=+0.048240172 container create 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 10:09:14 np0005604375 systemd[1]: Started libpod-conmon-5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405.scope.
Feb  1 10:09:14 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:09:14 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:14 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:14 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:14 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:14 np0005604375 podman[241031]: 2026-02-01 15:09:14.826371222 +0000 UTC m=+0.119563790 container init 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 10:09:14 np0005604375 podman[241031]: 2026-02-01 15:09:14.834394427 +0000 UTC m=+0.127586905 container start 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 10:09:14 np0005604375 podman[241031]: 2026-02-01 15:09:14.837549686 +0000 UTC m=+0.130742174 container attach 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 10:09:14 np0005604375 podman[241031]: 2026-02-01 15:09:14.741311079 +0000 UTC m=+0.034503577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:09:15 np0005604375 jolly_turing[241047]: {
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:    "0": [
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:        {
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "devices": [
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "/dev/loop3"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            ],
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_name": "ceph_lv0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_size": "21470642176",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "name": "ceph_lv0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "tags": {
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cluster_name": "ceph",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.crush_device_class": "",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.encrypted": "0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.objectstore": "bluestore",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osd_id": "0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.type": "block",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.vdo": "0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.with_tpm": "0"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            },
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "type": "block",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "vg_name": "ceph_vg0"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:        }
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:    ],
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:    "1": [
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:        {
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "devices": [
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "/dev/loop4"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            ],
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_name": "ceph_lv1",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_size": "21470642176",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "name": "ceph_lv1",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "tags": {
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cluster_name": "ceph",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.crush_device_class": "",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.encrypted": "0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.objectstore": "bluestore",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osd_id": "1",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.type": "block",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.vdo": "0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.with_tpm": "0"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            },
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "type": "block",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "vg_name": "ceph_vg1"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:        }
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:    ],
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:    "2": [
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:        {
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "devices": [
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "/dev/loop5"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            ],
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_name": "ceph_lv2",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_size": "21470642176",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "name": "ceph_lv2",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "tags": {
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.cluster_name": "ceph",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.crush_device_class": "",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.encrypted": "0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.objectstore": "bluestore",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osd_id": "2",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.type": "block",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.vdo": "0",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:                "ceph.with_tpm": "0"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            },
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "type": "block",
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:            "vg_name": "ceph_vg2"
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:        }
Feb  1 10:09:15 np0005604375 jolly_turing[241047]:    ]
Feb  1 10:09:15 np0005604375 jolly_turing[241047]: }
Feb  1 10:09:15 np0005604375 systemd[1]: libpod-5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405.scope: Deactivated successfully.
Feb  1 10:09:15 np0005604375 podman[241031]: 2026-02-01 15:09:15.093958609 +0000 UTC m=+0.387151087 container died 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:09:15 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e-merged.mount: Deactivated successfully.
Feb  1 10:09:15 np0005604375 podman[241031]: 2026-02-01 15:09:15.133570249 +0000 UTC m=+0.426762727 container remove 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 10:09:15 np0005604375 systemd[1]: libpod-conmon-5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405.scope: Deactivated successfully.
Feb  1 10:09:15 np0005604375 podman[241130]: 2026-02-01 15:09:15.492159255 +0000 UTC m=+0.034655362 container create 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:09:15 np0005604375 systemd[1]: Started libpod-conmon-870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454.scope.
Feb  1 10:09:15 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:09:15 np0005604375 podman[241130]: 2026-02-01 15:09:15.551798926 +0000 UTC m=+0.094295053 container init 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  1 10:09:15 np0005604375 podman[241130]: 2026-02-01 15:09:15.558043111 +0000 UTC m=+0.100539218 container start 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 10:09:15 np0005604375 podman[241130]: 2026-02-01 15:09:15.560597062 +0000 UTC m=+0.103093169 container attach 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:09:15 np0005604375 vigorous_einstein[241146]: 167 167
Feb  1 10:09:15 np0005604375 systemd[1]: libpod-870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454.scope: Deactivated successfully.
Feb  1 10:09:15 np0005604375 podman[241130]: 2026-02-01 15:09:15.562119015 +0000 UTC m=+0.104615112 container died 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 10:09:15 np0005604375 podman[241130]: 2026-02-01 15:09:15.478845782 +0000 UTC m=+0.021341899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:09:15 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e11f84e653dd9bc3ba64162aafbd924897622b8f40b31c41f78b9b7cb54e2f2d-merged.mount: Deactivated successfully.
Feb  1 10:09:15 np0005604375 podman[241130]: 2026-02-01 15:09:15.595332895 +0000 UTC m=+0.137829002 container remove 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 10:09:15 np0005604375 systemd[1]: libpod-conmon-870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454.scope: Deactivated successfully.
Feb  1 10:09:15 np0005604375 podman[241171]: 2026-02-01 15:09:15.740495272 +0000 UTC m=+0.045949268 container create d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:09:15 np0005604375 systemd[1]: Started libpod-conmon-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope.
Feb  1 10:09:15 np0005604375 podman[241171]: 2026-02-01 15:09:15.716245743 +0000 UTC m=+0.021699729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:09:15 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:09:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:09:15 np0005604375 podman[241171]: 2026-02-01 15:09:15.832597853 +0000 UTC m=+0.138051899 container init d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 10:09:15 np0005604375 podman[241171]: 2026-02-01 15:09:15.839901807 +0000 UTC m=+0.145355763 container start d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 10:09:15 np0005604375 podman[241171]: 2026-02-01 15:09:15.842991854 +0000 UTC m=+0.148445900 container attach d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:09:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:16 np0005604375 lvm[241295]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:09:16 np0005604375 lvm[241296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:09:16 np0005604375 lvm[241295]: VG ceph_vg1 finished
Feb  1 10:09:16 np0005604375 lvm[241296]: VG ceph_vg2 finished
Feb  1 10:09:16 np0005604375 lvm[241292]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:09:16 np0005604375 lvm[241292]: VG ceph_vg0 finished
Feb  1 10:09:16 np0005604375 podman[241262]: 2026-02-01 15:09:16.506110451 +0000 UTC m=+0.081825233 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:09:16 np0005604375 distracted_ishizaka[241187]: {}
Feb  1 10:09:16 np0005604375 podman[241263]: 2026-02-01 15:09:16.556931765 +0000 UTC m=+0.122193914 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller)
Feb  1 10:09:16 np0005604375 systemd[1]: libpod-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope: Deactivated successfully.
Feb  1 10:09:16 np0005604375 systemd[1]: libpod-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope: Consumed 1.084s CPU time.
Feb  1 10:09:16 np0005604375 podman[241171]: 2026-02-01 15:09:16.590573338 +0000 UTC m=+0.896027304 container died d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 10:09:16 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9-merged.mount: Deactivated successfully.
Feb  1 10:09:16 np0005604375 podman[241171]: 2026-02-01 15:09:16.633645324 +0000 UTC m=+0.939099290 container remove d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:09:16 np0005604375 systemd[1]: libpod-conmon-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope: Deactivated successfully.
Feb  1 10:09:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:09:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:09:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:09:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:09:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:09:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:09:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:09:17
Feb  1 10:09:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:09:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:09:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.control']
Feb  1 10:09:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:09:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:09:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.043665) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561043689, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1344, "num_deletes": 505, "total_data_size": 1631077, "memory_usage": 1659664, "flush_reason": "Manual Compaction"}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561050335, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1604644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13615, "largest_seqno": 14958, "table_properties": {"data_size": 1598706, "index_size": 2758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14990, "raw_average_key_size": 18, "raw_value_size": 1584871, "raw_average_value_size": 1911, "num_data_blocks": 126, "num_entries": 829, "num_filter_entries": 829, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958457, "oldest_key_time": 1769958457, "file_creation_time": 1769958561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 6708 microseconds, and 2921 cpu microseconds.
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.050372) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1604644 bytes OK
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.050389) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051444) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051457) EVENT_LOG_v1 {"time_micros": 1769958561051453, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051472) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1623972, prev total WAL file size 1623972, number of live WAL files 2.
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1567KB)], [32(7659KB)]
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561051893, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9447580, "oldest_snapshot_seqno": -1}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3850 keys, 7479501 bytes, temperature: kUnknown
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561089481, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7479501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7451875, "index_size": 16892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94165, "raw_average_key_size": 24, "raw_value_size": 7380321, "raw_average_value_size": 1916, "num_data_blocks": 717, "num_entries": 3850, "num_filter_entries": 3850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.089797) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7479501 bytes
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.090861) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 250.6 rd, 198.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.5 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(10.5) write-amplify(4.7) OK, records in: 4873, records dropped: 1023 output_compression: NoCompression
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.090890) EVENT_LOG_v1 {"time_micros": 1769958561090875, "job": 14, "event": "compaction_finished", "compaction_time_micros": 37695, "compaction_time_cpu_micros": 11688, "output_level": 6, "num_output_files": 1, "total_output_size": 7479501, "num_input_records": 4873, "num_output_records": 3850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561091272, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561092480, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:09:21 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:09:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:09:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:09:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:41 np0005604375 nova_compute[238794]: 2026-02-01 15:09:41.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:41 np0005604375 nova_compute[238794]: 2026-02-01 15:09:41.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:09:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:42 np0005604375 nova_compute[238794]: 2026-02-01 15:09:42.316 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:42 np0005604375 nova_compute[238794]: 2026-02-01 15:09:42.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:42 np0005604375 nova_compute[238794]: 2026-02-01 15:09:42.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:09:42 np0005604375 nova_compute[238794]: 2026-02-01 15:09:42.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:09:42 np0005604375 nova_compute[238794]: 2026-02-01 15:09:42.540 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:09:42 np0005604375 nova_compute[238794]: 2026-02-01 15:09:42.542 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:42 np0005604375 nova_compute[238794]: 2026-02-01 15:09:42.542 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.558 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.559 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.559 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.559 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:09:43 np0005604375 nova_compute[238794]: 2026-02-01 15:09:43.560 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:09:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:09:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831456260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.118 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:09:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.247 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.248 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.248 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.248 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.958 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.958 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:09:44 np0005604375 nova_compute[238794]: 2026-02-01 15:09:44.979 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:09:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:09:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2647746395' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:09:45 np0005604375 nova_compute[238794]: 2026-02-01 15:09:45.530 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:09:45 np0005604375 nova_compute[238794]: 2026-02-01 15:09:45.534 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:09:45 np0005604375 nova_compute[238794]: 2026-02-01 15:09:45.584 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:09:45 np0005604375 nova_compute[238794]: 2026-02-01 15:09:45.586 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:09:45 np0005604375 nova_compute[238794]: 2026-02-01 15:09:45.586 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:09:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:46 np0005604375 nova_compute[238794]: 2026-02-01 15:09:46.586 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:09:46 np0005604375 podman[241396]: 2026-02-01 15:09:46.974789154 +0000 UTC m=+0.057717058 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  1 10:09:46 np0005604375 podman[241397]: 2026-02-01 15:09:46.993988831 +0000 UTC m=+0.077419729 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  1 10:09:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:09:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:09:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:09:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:09:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:09:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:09:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:09:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3327240749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:09:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:09:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3327240749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:09:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:09:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:09:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:10:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3385 writes, 15K keys, 3385 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3385 writes, 3385 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1288 writes, 5858 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.63 MB, 0.01 MB/s#012Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    172.0      0.09              0.04         7    0.013       0      0       0.0       0.0#012  L6      1/0    7.13 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    248.9    205.1      0.21              0.09         6    0.034     24K   3194       0.0       0.0#012 Sum      1/0    7.13 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    171.0    194.8      0.30              0.13        13    0.023     24K   3194       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    198.5    200.5      0.18              0.07         8    0.022     17K   2464       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    248.9    205.1      0.21              0.09         6    0.034     24K   3194       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    181.5      0.09              0.04         6    0.015       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.3 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 308.00 MB usage: 1.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(106,1.61 MB,0.522832%) FilterBlock(14,74.98 KB,0.023775%) IndexBlock(14,153.55 KB,0.0486845%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  1 10:10:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:10:07.805 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:10:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:10:07.805 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:10:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:10:07.805 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:10:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:10:17 np0005604375 podman[241547]: 2026-02-01 15:10:17.689803658 +0000 UTC m=+0.113671886 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  1 10:10:17 np0005604375 podman[241548]: 2026-02-01 15:10:17.709273343 +0000 UTC m=+0.132153453 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 10:10:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:10:17
Feb  1 10:10:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:10:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:10:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control']
Feb  1 10:10:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:10:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:10:17 np0005604375 podman[241628]: 2026-02-01 15:10:17.888922096 +0000 UTC m=+0.046562135 container create 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 10:10:17 np0005604375 systemd[1]: Started libpod-conmon-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope.
Feb  1 10:10:17 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:10:17 np0005604375 podman[241628]: 2026-02-01 15:10:17.863466673 +0000 UTC m=+0.021106812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:10:17 np0005604375 podman[241628]: 2026-02-01 15:10:17.962983721 +0000 UTC m=+0.120623850 container init 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 10:10:17 np0005604375 podman[241628]: 2026-02-01 15:10:17.971889411 +0000 UTC m=+0.129529460 container start 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  1 10:10:17 np0005604375 podman[241628]: 2026-02-01 15:10:17.975958815 +0000 UTC m=+0.133598954 container attach 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:10:17 np0005604375 systemd[1]: libpod-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope: Deactivated successfully.
Feb  1 10:10:17 np0005604375 blissful_golick[241645]: 167 167
Feb  1 10:10:17 np0005604375 podman[241628]: 2026-02-01 15:10:17.978381862 +0000 UTC m=+0.136021931 container died 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:10:17 np0005604375 conmon[241645]: conmon 7060d8f9c453f66fa248 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope/container/memory.events
Feb  1 10:10:18 np0005604375 systemd[1]: var-lib-containers-storage-overlay-33e09286c57d7b73ad0b2d887f4565c958e61e46d31ddc8467c1f9923fbd00b5-merged.mount: Deactivated successfully.
Feb  1 10:10:18 np0005604375 podman[241628]: 2026-02-01 15:10:18.024945337 +0000 UTC m=+0.182585416 container remove 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:10:18 np0005604375 systemd[1]: libpod-conmon-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope: Deactivated successfully.
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:18 np0005604375 podman[241667]: 2026-02-01 15:10:18.205957428 +0000 UTC m=+0.054494748 container create f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 10:10:18 np0005604375 systemd[1]: Started libpod-conmon-f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977.scope.
Feb  1 10:10:18 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:10:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:18 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:18 np0005604375 podman[241667]: 2026-02-01 15:10:18.179182688 +0000 UTC m=+0.027720098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:10:18 np0005604375 podman[241667]: 2026-02-01 15:10:18.300460046 +0000 UTC m=+0.148997386 container init f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 10:10:18 np0005604375 podman[241667]: 2026-02-01 15:10:18.308901102 +0000 UTC m=+0.157438452 container start f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:10:18 np0005604375 podman[241667]: 2026-02-01 15:10:18.313593174 +0000 UTC m=+0.162130504 container attach f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:10:18 np0005604375 peaceful_mcnulty[241683]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:10:18 np0005604375 peaceful_mcnulty[241683]: --> All data devices are unavailable
Feb  1 10:10:18 np0005604375 systemd[1]: libpod-f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977.scope: Deactivated successfully.
Feb  1 10:10:18 np0005604375 podman[241667]: 2026-02-01 15:10:18.755145084 +0000 UTC m=+0.603682404 container died f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 10:10:18 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68-merged.mount: Deactivated successfully.
Feb  1 10:10:18 np0005604375 podman[241667]: 2026-02-01 15:10:18.793353044 +0000 UTC m=+0.641890354 container remove f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:10:18 np0005604375 systemd[1]: libpod-conmon-f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977.scope: Deactivated successfully.
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:10:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:10:19 np0005604375 podman[241778]: 2026-02-01 15:10:19.220433368 +0000 UTC m=+0.029846487 container create b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:10:19 np0005604375 systemd[1]: Started libpod-conmon-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope.
Feb  1 10:10:19 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:10:19 np0005604375 podman[241778]: 2026-02-01 15:10:19.282781675 +0000 UTC m=+0.092194804 container init b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 10:10:19 np0005604375 podman[241778]: 2026-02-01 15:10:19.286397786 +0000 UTC m=+0.095810915 container start b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 10:10:19 np0005604375 systemd[1]: libpod-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope: Deactivated successfully.
Feb  1 10:10:19 np0005604375 blissful_kapitsa[241795]: 167 167
Feb  1 10:10:19 np0005604375 podman[241778]: 2026-02-01 15:10:19.290159642 +0000 UTC m=+0.099572771 container attach b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:10:19 np0005604375 conmon[241795]: conmon b45492885904ff6b0e1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope/container/memory.events
Feb  1 10:10:19 np0005604375 podman[241778]: 2026-02-01 15:10:19.29045404 +0000 UTC m=+0.099867169 container died b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:10:19 np0005604375 podman[241778]: 2026-02-01 15:10:19.206833077 +0000 UTC m=+0.016246236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:10:19 np0005604375 systemd[1]: var-lib-containers-storage-overlay-3fa7887fba4f86234be9b0b7530fa74aeec0263c112712398106ea6611231215-merged.mount: Deactivated successfully.
Feb  1 10:10:19 np0005604375 podman[241778]: 2026-02-01 15:10:19.322107957 +0000 UTC m=+0.131521076 container remove b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:10:19 np0005604375 systemd[1]: libpod-conmon-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope: Deactivated successfully.
Feb  1 10:10:19 np0005604375 podman[241819]: 2026-02-01 15:10:19.433502188 +0000 UTC m=+0.034157538 container create a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:10:19 np0005604375 systemd[1]: Started libpod-conmon-a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af.scope.
Feb  1 10:10:19 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:10:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:19 np0005604375 podman[241819]: 2026-02-01 15:10:19.51390957 +0000 UTC m=+0.114565010 container init a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:10:19 np0005604375 podman[241819]: 2026-02-01 15:10:19.418783825 +0000 UTC m=+0.019439205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:10:19 np0005604375 podman[241819]: 2026-02-01 15:10:19.519590519 +0000 UTC m=+0.120245889 container start a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:10:19 np0005604375 podman[241819]: 2026-02-01 15:10:19.522941833 +0000 UTC m=+0.123597183 container attach a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]: {
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:    "0": [
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:        {
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "devices": [
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "/dev/loop3"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            ],
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_name": "ceph_lv0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_size": "21470642176",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "name": "ceph_lv0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "tags": {
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cluster_name": "ceph",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.crush_device_class": "",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.encrypted": "0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.objectstore": "bluestore",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osd_id": "0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.type": "block",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.vdo": "0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.with_tpm": "0"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            },
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "type": "block",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "vg_name": "ceph_vg0"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:        }
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:    ],
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:    "1": [
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:        {
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "devices": [
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "/dev/loop4"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            ],
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_name": "ceph_lv1",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_size": "21470642176",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "name": "ceph_lv1",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "tags": {
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cluster_name": "ceph",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.crush_device_class": "",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.encrypted": "0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.objectstore": "bluestore",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osd_id": "1",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.type": "block",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.vdo": "0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.with_tpm": "0"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            },
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "type": "block",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "vg_name": "ceph_vg1"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:        }
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:    ],
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:    "2": [
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:        {
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "devices": [
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "/dev/loop5"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            ],
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_name": "ceph_lv2",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_size": "21470642176",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "name": "ceph_lv2",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "tags": {
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.cluster_name": "ceph",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.crush_device_class": "",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.encrypted": "0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.objectstore": "bluestore",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osd_id": "2",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.type": "block",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.vdo": "0",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:                "ceph.with_tpm": "0"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            },
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "type": "block",
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:            "vg_name": "ceph_vg2"
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:        }
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]:    ]
Feb  1 10:10:19 np0005604375 focused_khayyam[241836]: }
Feb  1 10:10:19 np0005604375 systemd[1]: libpod-a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af.scope: Deactivated successfully.
Feb  1 10:10:19 np0005604375 podman[241819]: 2026-02-01 15:10:19.760837598 +0000 UTC m=+0.361492938 container died a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 10:10:19 np0005604375 systemd[1]: var-lib-containers-storage-overlay-682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f-merged.mount: Deactivated successfully.
Feb  1 10:10:19 np0005604375 podman[241819]: 2026-02-01 15:10:19.799580124 +0000 UTC m=+0.400235474 container remove a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:10:19 np0005604375 systemd[1]: libpod-conmon-a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af.scope: Deactivated successfully.
Feb  1 10:10:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:20 np0005604375 podman[241919]: 2026-02-01 15:10:20.206233956 +0000 UTC m=+0.049325943 container create 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 10:10:20 np0005604375 systemd[1]: Started libpod-conmon-1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41.scope.
Feb  1 10:10:20 np0005604375 podman[241919]: 2026-02-01 15:10:20.181715799 +0000 UTC m=+0.024807806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:10:20 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:10:20 np0005604375 podman[241919]: 2026-02-01 15:10:20.294123649 +0000 UTC m=+0.137215646 container init 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 10:10:20 np0005604375 podman[241919]: 2026-02-01 15:10:20.302330388 +0000 UTC m=+0.145422355 container start 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:10:20 np0005604375 practical_yonath[241935]: 167 167
Feb  1 10:10:20 np0005604375 systemd[1]: libpod-1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41.scope: Deactivated successfully.
Feb  1 10:10:20 np0005604375 podman[241919]: 2026-02-01 15:10:20.307521414 +0000 UTC m=+0.150613491 container attach 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 10:10:20 np0005604375 podman[241919]: 2026-02-01 15:10:20.30846427 +0000 UTC m=+0.151556267 container died 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb  1 10:10:20 np0005604375 systemd[1]: var-lib-containers-storage-overlay-2500c1f01c87256a3615bdbab2673e74dbd901f51ab80a229c94e6c019ccd281-merged.mount: Deactivated successfully.
Feb  1 10:10:20 np0005604375 podman[241919]: 2026-02-01 15:10:20.402427653 +0000 UTC m=+0.245519620 container remove 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  1 10:10:20 np0005604375 systemd[1]: libpod-conmon-1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41.scope: Deactivated successfully.
Feb  1 10:10:20 np0005604375 podman[241959]: 2026-02-01 15:10:20.563219898 +0000 UTC m=+0.052182083 container create 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:10:20 np0005604375 systemd[1]: Started libpod-conmon-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope.
Feb  1 10:10:20 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:10:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:20 np0005604375 podman[241959]: 2026-02-01 15:10:20.546264113 +0000 UTC m=+0.035226298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:10:20 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:10:20 np0005604375 podman[241959]: 2026-02-01 15:10:20.663738264 +0000 UTC m=+0.152700479 container init 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:10:20 np0005604375 podman[241959]: 2026-02-01 15:10:20.675147673 +0000 UTC m=+0.164109858 container start 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 10:10:20 np0005604375 podman[241959]: 2026-02-01 15:10:20.678418435 +0000 UTC m=+0.167380860 container attach 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 10:10:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:21 np0005604375 lvm[242054]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:10:21 np0005604375 lvm[242053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:10:21 np0005604375 lvm[242053]: VG ceph_vg0 finished
Feb  1 10:10:21 np0005604375 lvm[242054]: VG ceph_vg1 finished
Feb  1 10:10:21 np0005604375 lvm[242056]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:10:21 np0005604375 lvm[242056]: VG ceph_vg2 finished
Feb  1 10:10:21 np0005604375 zen_carson[241975]: {}
Feb  1 10:10:21 np0005604375 systemd[1]: libpod-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope: Deactivated successfully.
Feb  1 10:10:21 np0005604375 systemd[1]: libpod-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope: Consumed 1.159s CPU time.
Feb  1 10:10:21 np0005604375 podman[241959]: 2026-02-01 15:10:21.492126011 +0000 UTC m=+0.981088196 container died 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 10:10:21 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a-merged.mount: Deactivated successfully.
Feb  1 10:10:21 np0005604375 podman[241959]: 2026-02-01 15:10:21.534983282 +0000 UTC m=+1.023945467 container remove 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 10:10:21 np0005604375 systemd[1]: libpod-conmon-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope: Deactivated successfully.
Feb  1 10:10:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:10:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:10:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:10:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:10:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:10:21 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:10:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:10:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:42 np0005604375 nova_compute[238794]: 2026-02-01 15:10:42.315 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:42 np0005604375 nova_compute[238794]: 2026-02-01 15:10:42.316 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:43 np0005604375 nova_compute[238794]: 2026-02-01 15:10:43.469 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:43 np0005604375 nova_compute[238794]: 2026-02-01 15:10:43.469 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  1 10:10:43 np0005604375 nova_compute[238794]: 2026-02-01 15:10:43.470 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  1 10:10:43 np0005604375 nova_compute[238794]: 2026-02-01 15:10:43.689 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  1 10:10:43 np0005604375 nova_compute[238794]: 2026-02-01 15:10:43.690 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:43 np0005604375 nova_compute[238794]: 2026-02-01 15:10:43.690 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:43 np0005604375 nova_compute[238794]: 2026-02-01 15:10:43.691 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  1 10:10:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.344 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  1 10:10:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:10:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072529224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:10:44 np0005604375 nova_compute[238794]: 2026-02-01 15:10:44.919 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.048 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.049 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5120MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.050 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.050 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.119 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.120 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.138 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  1 10:10:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:10:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491844404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.636 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.643 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.663 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.666 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  1 10:10:45 np0005604375 nova_compute[238794]: 2026-02-01 15:10:45.667 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  1 10:10:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:46 np0005604375 nova_compute[238794]: 2026-02-01 15:10:46.668 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:46 np0005604375 nova_compute[238794]: 2026-02-01 15:10:46.668 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:10:47 np0005604375 podman[242140]: 2026-02-01 15:10:47.991663247 +0000 UTC m=+0.076685540 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  1 10:10:48 np0005604375 podman[242141]: 2026-02-01 15:10:48.025153195 +0000 UTC m=+0.105029563 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Feb  1 10:10:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:10:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:10:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:10:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:10:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:10:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:10:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:10:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3244925717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:10:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:10:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3244925717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:10:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:10:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:10:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:11:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5863 writes, 24K keys, 5863 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5863 writes, 1012 syncs, 5.79 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Feb  1 10:11:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:11:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7147 writes, 29K keys, 7147 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7147 writes, 1430 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Feb  1 10:11:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:11:07.806 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:11:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:11:07.807 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:11:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:11:07.807 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:11:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:11:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5731 writes, 24K keys, 5731 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5731 writes, 924 syncs, 6.20 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  1 10:11:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:11 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb  1 10:11:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:11:17
Feb  1 10:11:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:11:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:11:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', '.mgr']
Feb  1 10:11:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:11:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:11:19 np0005604375 podman[242185]: 2026-02-01 15:11:19.014087245 +0000 UTC m=+0.092317662 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb  1 10:11:19 np0005604375 podman[242186]: 2026-02-01 15:11:19.045922799 +0000 UTC m=+0.126091161 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Feb  1 10:11:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:11:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:11:22 np0005604375 podman[242445]: 2026-02-01 15:11:22.962911588 +0000 UTC m=+0.050821668 container create d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 10:11:23 np0005604375 systemd[1]: Started libpod-conmon-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope.
Feb  1 10:11:23 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:11:23 np0005604375 podman[242445]: 2026-02-01 15:11:22.935034115 +0000 UTC m=+0.022944265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:11:23 np0005604375 podman[242445]: 2026-02-01 15:11:23.034473266 +0000 UTC m=+0.122383386 container init d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 10:11:23 np0005604375 podman[242445]: 2026-02-01 15:11:23.039704973 +0000 UTC m=+0.127615063 container start d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  1 10:11:23 np0005604375 epic_haslett[242461]: 167 167
Feb  1 10:11:23 np0005604375 systemd[1]: libpod-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope: Deactivated successfully.
Feb  1 10:11:23 np0005604375 conmon[242461]: conmon d6d77084c45039005eff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope/container/memory.events
Feb  1 10:11:23 np0005604375 podman[242445]: 2026-02-01 15:11:23.045484896 +0000 UTC m=+0.133394946 container attach d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 10:11:23 np0005604375 podman[242445]: 2026-02-01 15:11:23.046531865 +0000 UTC m=+0.134441915 container died d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:11:23 np0005604375 systemd[1]: var-lib-containers-storage-overlay-fefb93f468b8d9fbb66eef2b55b7c017071d3e1eaa15fa28fbd23f394822f2e0-merged.mount: Deactivated successfully.
Feb  1 10:11:23 np0005604375 podman[242445]: 2026-02-01 15:11:23.091574619 +0000 UTC m=+0.179484699 container remove d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:11:23 np0005604375 systemd[1]: libpod-conmon-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope: Deactivated successfully.
Feb  1 10:11:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:11:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:11:23 np0005604375 podman[242486]: 2026-02-01 15:11:23.227817914 +0000 UTC m=+0.055487149 container create 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:11:23 np0005604375 systemd[1]: Started libpod-conmon-22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c.scope.
Feb  1 10:11:23 np0005604375 podman[242486]: 2026-02-01 15:11:23.205121977 +0000 UTC m=+0.032791262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:11:23 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:11:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:23 np0005604375 podman[242486]: 2026-02-01 15:11:23.343535272 +0000 UTC m=+0.171204527 container init 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:11:23 np0005604375 podman[242486]: 2026-02-01 15:11:23.361342742 +0000 UTC m=+0.189011977 container start 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:11:23 np0005604375 podman[242486]: 2026-02-01 15:11:23.367211246 +0000 UTC m=+0.194880491 container attach 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 10:11:23 np0005604375 nice_villani[242502]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:11:23 np0005604375 nice_villani[242502]: --> All data devices are unavailable
Feb  1 10:11:23 np0005604375 systemd[1]: libpod-22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c.scope: Deactivated successfully.
Feb  1 10:11:23 np0005604375 podman[242486]: 2026-02-01 15:11:23.878106306 +0000 UTC m=+0.705775581 container died 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:11:23 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898-merged.mount: Deactivated successfully.
Feb  1 10:11:23 np0005604375 podman[242486]: 2026-02-01 15:11:23.931457194 +0000 UTC m=+0.759126429 container remove 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 10:11:23 np0005604375 systemd[1]: libpod-conmon-22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c.scope: Deactivated successfully.
Feb  1 10:11:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:24 np0005604375 podman[242596]: 2026-02-01 15:11:24.377929097 +0000 UTC m=+0.090811991 container create 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  1 10:11:24 np0005604375 podman[242596]: 2026-02-01 15:11:24.306978145 +0000 UTC m=+0.019861059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:11:24 np0005604375 systemd[1]: Started libpod-conmon-8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15.scope.
Feb  1 10:11:24 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:11:24 np0005604375 podman[242596]: 2026-02-01 15:11:24.515053036 +0000 UTC m=+0.227935950 container init 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  1 10:11:24 np0005604375 podman[242596]: 2026-02-01 15:11:24.520505199 +0000 UTC m=+0.233388093 container start 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 10:11:24 np0005604375 gallant_lovelace[242612]: 167 167
Feb  1 10:11:24 np0005604375 systemd[1]: libpod-8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15.scope: Deactivated successfully.
Feb  1 10:11:24 np0005604375 podman[242596]: 2026-02-01 15:11:24.549362239 +0000 UTC m=+0.262245103 container attach 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 10:11:24 np0005604375 podman[242596]: 2026-02-01 15:11:24.549700128 +0000 UTC m=+0.262582992 container died 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:11:24 np0005604375 systemd[1]: var-lib-containers-storage-overlay-45fed254ad37dadc72cf335bf02ee3b15035bed87fa7bdb5fe75e648eef2a23e-merged.mount: Deactivated successfully.
Feb  1 10:11:24 np0005604375 podman[242596]: 2026-02-01 15:11:24.594938668 +0000 UTC m=+0.307821552 container remove 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:11:24 np0005604375 systemd[1]: libpod-conmon-8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15.scope: Deactivated successfully.
Feb  1 10:11:24 np0005604375 podman[242638]: 2026-02-01 15:11:24.749357793 +0000 UTC m=+0.061097666 container create 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 10:11:24 np0005604375 systemd[1]: Started libpod-conmon-079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571.scope.
Feb  1 10:11:24 np0005604375 podman[242638]: 2026-02-01 15:11:24.724349751 +0000 UTC m=+0.036089674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:11:24 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:11:24 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:24 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:24 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:24 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:24 np0005604375 podman[242638]: 2026-02-01 15:11:24.885440923 +0000 UTC m=+0.197180876 container init 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb  1 10:11:24 np0005604375 podman[242638]: 2026-02-01 15:11:24.89533395 +0000 UTC m=+0.207073823 container start 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  1 10:11:24 np0005604375 podman[242638]: 2026-02-01 15:11:24.898827879 +0000 UTC m=+0.210567752 container attach 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:11:25 np0005604375 brave_noether[242654]: {
Feb  1 10:11:25 np0005604375 brave_noether[242654]:    "0": [
Feb  1 10:11:25 np0005604375 brave_noether[242654]:        {
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "devices": [
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "/dev/loop3"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            ],
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_name": "ceph_lv0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_size": "21470642176",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "name": "ceph_lv0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "tags": {
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cluster_name": "ceph",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.crush_device_class": "",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.encrypted": "0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.objectstore": "bluestore",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osd_id": "0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.type": "block",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.vdo": "0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.with_tpm": "0"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            },
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "type": "block",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "vg_name": "ceph_vg0"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:        }
Feb  1 10:11:25 np0005604375 brave_noether[242654]:    ],
Feb  1 10:11:25 np0005604375 brave_noether[242654]:    "1": [
Feb  1 10:11:25 np0005604375 brave_noether[242654]:        {
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "devices": [
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "/dev/loop4"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            ],
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_name": "ceph_lv1",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_size": "21470642176",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "name": "ceph_lv1",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "tags": {
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cluster_name": "ceph",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.crush_device_class": "",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.encrypted": "0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.objectstore": "bluestore",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osd_id": "1",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.type": "block",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.vdo": "0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.with_tpm": "0"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            },
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "type": "block",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "vg_name": "ceph_vg1"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:        }
Feb  1 10:11:25 np0005604375 brave_noether[242654]:    ],
Feb  1 10:11:25 np0005604375 brave_noether[242654]:    "2": [
Feb  1 10:11:25 np0005604375 brave_noether[242654]:        {
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "devices": [
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "/dev/loop5"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            ],
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_name": "ceph_lv2",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_size": "21470642176",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "name": "ceph_lv2",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "tags": {
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.cluster_name": "ceph",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.crush_device_class": "",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.encrypted": "0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.objectstore": "bluestore",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osd_id": "2",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.type": "block",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.vdo": "0",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:                "ceph.with_tpm": "0"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            },
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "type": "block",
Feb  1 10:11:25 np0005604375 brave_noether[242654]:            "vg_name": "ceph_vg2"
Feb  1 10:11:25 np0005604375 brave_noether[242654]:        }
Feb  1 10:11:25 np0005604375 brave_noether[242654]:    ]
Feb  1 10:11:25 np0005604375 brave_noether[242654]: }
Feb  1 10:11:25 np0005604375 systemd[1]: libpod-079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571.scope: Deactivated successfully.
Feb  1 10:11:25 np0005604375 podman[242638]: 2026-02-01 15:11:25.177370537 +0000 UTC m=+0.489110440 container died 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 10:11:25 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4-merged.mount: Deactivated successfully.
Feb  1 10:11:25 np0005604375 podman[242638]: 2026-02-01 15:11:25.225439567 +0000 UTC m=+0.537179450 container remove 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:11:25 np0005604375 systemd[1]: libpod-conmon-079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571.scope: Deactivated successfully.
Feb  1 10:11:25 np0005604375 podman[242740]: 2026-02-01 15:11:25.675792748 +0000 UTC m=+0.041622269 container create a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 10:11:25 np0005604375 systemd[1]: Started libpod-conmon-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope.
Feb  1 10:11:25 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:11:25 np0005604375 podman[242740]: 2026-02-01 15:11:25.659507181 +0000 UTC m=+0.025336732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:11:25 np0005604375 podman[242740]: 2026-02-01 15:11:25.774660543 +0000 UTC m=+0.140490054 container init a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 10:11:25 np0005604375 podman[242740]: 2026-02-01 15:11:25.782142603 +0000 UTC m=+0.147972134 container start a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:11:25 np0005604375 elastic_carver[242757]: 167 167
Feb  1 10:11:25 np0005604375 systemd[1]: libpod-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope: Deactivated successfully.
Feb  1 10:11:25 np0005604375 conmon[242757]: conmon a0c3cb91ed4ac361203e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope/container/memory.events
Feb  1 10:11:25 np0005604375 podman[242740]: 2026-02-01 15:11:25.805589772 +0000 UTC m=+0.171419293 container attach a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:11:25 np0005604375 podman[242740]: 2026-02-01 15:11:25.806090186 +0000 UTC m=+0.171919707 container died a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 10:11:25 np0005604375 systemd[1]: var-lib-containers-storage-overlay-cdb941e0555440a025aa4b9d2ddc26f13f0a0f015da126514d7d958d21166311-merged.mount: Deactivated successfully.
Feb  1 10:11:25 np0005604375 podman[242740]: 2026-02-01 15:11:25.910766924 +0000 UTC m=+0.276596445 container remove a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:11:25 np0005604375 systemd[1]: libpod-conmon-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope: Deactivated successfully.
Feb  1 10:11:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:26 np0005604375 podman[242783]: 2026-02-01 15:11:26.07379305 +0000 UTC m=+0.052361471 container create 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Feb  1 10:11:26 np0005604375 systemd[1]: Started libpod-conmon-6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7.scope.
Feb  1 10:11:26 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:11:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:26 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:11:26 np0005604375 podman[242783]: 2026-02-01 15:11:26.044965011 +0000 UTC m=+0.023533492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:11:26 np0005604375 podman[242783]: 2026-02-01 15:11:26.157119179 +0000 UTC m=+0.135687640 container init 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 10:11:26 np0005604375 podman[242783]: 2026-02-01 15:11:26.168853928 +0000 UTC m=+0.147422329 container start 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  1 10:11:26 np0005604375 podman[242783]: 2026-02-01 15:11:26.172443919 +0000 UTC m=+0.151012390 container attach 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:11:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:26 np0005604375 lvm[242879]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:11:26 np0005604375 lvm[242877]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:11:26 np0005604375 lvm[242877]: VG ceph_vg0 finished
Feb  1 10:11:26 np0005604375 lvm[242879]: VG ceph_vg1 finished
Feb  1 10:11:26 np0005604375 lvm[242881]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:11:26 np0005604375 lvm[242881]: VG ceph_vg2 finished
Feb  1 10:11:26 np0005604375 practical_clarke[242799]: {}
Feb  1 10:11:26 np0005604375 systemd[1]: libpod-6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7.scope: Deactivated successfully.
Feb  1 10:11:26 np0005604375 podman[242783]: 2026-02-01 15:11:26.909848918 +0000 UTC m=+0.888417309 container died 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 10:11:27 np0005604375 systemd[1]: var-lib-containers-storage-overlay-8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0-merged.mount: Deactivated successfully.
Feb  1 10:11:27 np0005604375 podman[242783]: 2026-02-01 15:11:27.41615558 +0000 UTC m=+1.394723991 container remove 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 10:11:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:11:27 np0005604375 systemd[1]: libpod-conmon-6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7.scope: Deactivated successfully.
Feb  1 10:11:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:11:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:11:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:11:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:40 np0005604375 nova_compute[238794]: 2026-02-01 15:11:40.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:40 np0005604375 nova_compute[238794]: 2026-02-01 15:11:40.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  1 10:11:40 np0005604375 nova_compute[238794]: 2026-02-01 15:11:40.437 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  1 10:11:40 np0005604375 nova_compute[238794]: 2026-02-01 15:11:40.439 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:40 np0005604375 nova_compute[238794]: 2026-02-01 15:11:40.440 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  1 10:11:40 np0005604375 nova_compute[238794]: 2026-02-01 15:11:40.634 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:43 np0005604375 nova_compute[238794]: 2026-02-01 15:11:43.924 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:44 np0005604375 nova_compute[238794]: 2026-02-01 15:11:44.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:44 np0005604375 nova_compute[238794]: 2026-02-01 15:11:44.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:11:44 np0005604375 nova_compute[238794]: 2026-02-01 15:11:44.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:11:44 np0005604375 nova_compute[238794]: 2026-02-01 15:11:44.400 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:11:44 np0005604375 nova_compute[238794]: 2026-02-01 15:11:44.401 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:44 np0005604375 nova_compute[238794]: 2026-02-01 15:11:44.401 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:11:45 np0005604375 nova_compute[238794]: 2026-02-01 15:11:45.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:45 np0005604375 nova_compute[238794]: 2026-02-01 15:11:45.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.352 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.352 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.353 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.353 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.353 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765978889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:11:46 np0005604375 nova_compute[238794]: 2026-02-01 15:11:46.851 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.936884) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958706936935, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1385, "num_deletes": 251, "total_data_size": 2229066, "memory_usage": 2272792, "flush_reason": "Manual Compaction"}
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958706949313, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2186608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14959, "largest_seqno": 16343, "table_properties": {"data_size": 2180092, "index_size": 3715, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13372, "raw_average_key_size": 19, "raw_value_size": 2167047, "raw_average_value_size": 3182, "num_data_blocks": 170, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958561, "oldest_key_time": 1769958561, "file_creation_time": 1769958706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 12466 microseconds, and 3574 cpu microseconds.
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.949357) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2186608 bytes OK
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.949374) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951115) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951132) EVENT_LOG_v1 {"time_micros": 1769958706951127, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951150) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2222912, prev total WAL file size 2222912, number of live WAL files 2.
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951722) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2135KB)], [35(7304KB)]
Feb  1 10:11:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958706951800, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9666109, "oldest_snapshot_seqno": -1}
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.011 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.012 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.012 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.012 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4017 keys, 7857216 bytes, temperature: kUnknown
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958707020694, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7857216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7828214, "index_size": 17884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 98110, "raw_average_key_size": 24, "raw_value_size": 7753428, "raw_average_value_size": 1930, "num_data_blocks": 757, "num_entries": 4017, "num_filter_entries": 4017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.020962) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7857216 bytes
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.022469) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.2 rd, 113.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 4531, records dropped: 514 output_compression: NoCompression
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.022488) EVENT_LOG_v1 {"time_micros": 1769958707022479, "job": 16, "event": "compaction_finished", "compaction_time_micros": 68961, "compaction_time_cpu_micros": 27948, "output_level": 6, "num_output_files": 1, "total_output_size": 7857216, "num_input_records": 4531, "num_output_records": 4017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958707022785, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958707023433, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:11:47 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.270 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.270 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.389 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing inventories for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.502 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating ProviderTree inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.502 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.521 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing aggregate associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.541 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing trait associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, traits: COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  1 10:11:47 np0005604375 nova_compute[238794]: 2026-02-01 15:11:47.556 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:11:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:11:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519629576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:11:48 np0005604375 nova_compute[238794]: 2026-02-01 15:11:48.069 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:11:48 np0005604375 nova_compute[238794]: 2026-02-01 15:11:48.075 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:11:48 np0005604375 nova_compute[238794]: 2026-02-01 15:11:48.093 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:11:48 np0005604375 nova_compute[238794]: 2026-02-01 15:11:48.096 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:11:48 np0005604375 nova_compute[238794]: 2026-02-01 15:11:48.097 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:11:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:11:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:11:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:11:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:11:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:11:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:11:49 np0005604375 podman[242968]: 2026-02-01 15:11:49.983886605 +0000 UTC m=+0.068646568 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator 
team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  1 10:11:49 np0005604375 podman[242967]: 2026-02-01 15:11:49.988857324 +0000 UTC m=+0.073466493 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  1 10:11:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:11:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2270999465' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:11:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:11:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2270999465' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:11:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:11:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:11:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:12:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:12:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:12:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:12:07.807 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:12:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:12:07.808 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:12:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:12:07.808 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:12:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:12:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:12:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  1 10:12:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:12:17
Feb  1 10:12:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:12:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:12:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.control']
Feb  1 10:12:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:12:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:12:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:20 np0005604375 podman[243009]: 2026-02-01 15:12:20.997148413 +0000 UTC m=+0.076759966 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb  1 10:12:21 np0005604375 podman[243010]: 2026-02-01 15:12:21.033579485 +0000 UTC m=+0.108592039 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  1 10:12:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:12:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:12:28 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:12:28 np0005604375 podman[243198]: 2026-02-01 15:12:28.481395106 +0000 UTC m=+0.031798774 container create c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:12:28 np0005604375 systemd[1]: Started libpod-conmon-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope.
Feb  1 10:12:28 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:12:28 np0005604375 podman[243198]: 2026-02-01 15:12:28.561193256 +0000 UTC m=+0.111596964 container init c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb  1 10:12:28 np0005604375 podman[243198]: 2026-02-01 15:12:28.466206039 +0000 UTC m=+0.016609737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:12:28 np0005604375 podman[243198]: 2026-02-01 15:12:28.565380953 +0000 UTC m=+0.115784611 container start c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 10:12:28 np0005604375 podman[243198]: 2026-02-01 15:12:28.568417798 +0000 UTC m=+0.118821476 container attach c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:12:28 np0005604375 systemd[1]: libpod-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope: Deactivated successfully.
Feb  1 10:12:28 np0005604375 cool_noyce[243215]: 167 167
Feb  1 10:12:28 np0005604375 conmon[243215]: conmon c64bba6fcbd184d2374e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope/container/memory.events
Feb  1 10:12:28 np0005604375 podman[243198]: 2026-02-01 15:12:28.571776293 +0000 UTC m=+0.122179961 container died c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  1 10:12:28 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4bc8d1b06e0f13fa46d62b009f306baed635bafc13d64fb1eefa8487804de3ef-merged.mount: Deactivated successfully.
Feb  1 10:12:28 np0005604375 podman[243198]: 2026-02-01 15:12:28.610244682 +0000 UTC m=+0.160648360 container remove c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:12:28 np0005604375 systemd[1]: libpod-conmon-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope: Deactivated successfully.
Feb  1 10:12:28 np0005604375 podman[243238]: 2026-02-01 15:12:28.769283087 +0000 UTC m=+0.079616526 container create 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:12:28 np0005604375 systemd[1]: Started libpod-conmon-30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d.scope.
Feb  1 10:12:28 np0005604375 podman[243238]: 2026-02-01 15:12:28.712977276 +0000 UTC m=+0.023310725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:12:28 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:12:28 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:28 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:28 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:28 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:28 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:28 np0005604375 podman[243238]: 2026-02-01 15:12:28.869349185 +0000 UTC m=+0.179682614 container init 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb  1 10:12:28 np0005604375 podman[243238]: 2026-02-01 15:12:28.876671831 +0000 UTC m=+0.187005250 container start 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb  1 10:12:28 np0005604375 podman[243238]: 2026-02-01 15:12:28.880622752 +0000 UTC m=+0.190956171 container attach 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:12:29 np0005604375 youthful_mclaren[243254]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:12:29 np0005604375 youthful_mclaren[243254]: --> All data devices are unavailable
Feb  1 10:12:29 np0005604375 systemd[1]: libpod-30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d.scope: Deactivated successfully.
Feb  1 10:12:29 np0005604375 podman[243238]: 2026-02-01 15:12:29.276241117 +0000 UTC m=+0.586574516 container died 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 10:12:29 np0005604375 systemd[1]: var-lib-containers-storage-overlay-365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368-merged.mount: Deactivated successfully.
Feb  1 10:12:29 np0005604375 podman[243238]: 2026-02-01 15:12:29.317772303 +0000 UTC m=+0.628105732 container remove 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 10:12:29 np0005604375 systemd[1]: libpod-conmon-30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d.scope: Deactivated successfully.
Feb  1 10:12:29 np0005604375 podman[243350]: 2026-02-01 15:12:29.829723213 +0000 UTC m=+0.070331325 container create dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:12:29 np0005604375 systemd[1]: Started libpod-conmon-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope.
Feb  1 10:12:29 np0005604375 podman[243350]: 2026-02-01 15:12:29.779450802 +0000 UTC m=+0.020058974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:12:29 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:12:29 np0005604375 podman[243350]: 2026-02-01 15:12:29.910007537 +0000 UTC m=+0.150615659 container init dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  1 10:12:29 np0005604375 podman[243350]: 2026-02-01 15:12:29.919097942 +0000 UTC m=+0.159706054 container start dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:12:29 np0005604375 awesome_leakey[243367]: 167 167
Feb  1 10:12:29 np0005604375 podman[243350]: 2026-02-01 15:12:29.92435702 +0000 UTC m=+0.164965142 container attach dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb  1 10:12:29 np0005604375 systemd[1]: libpod-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope: Deactivated successfully.
Feb  1 10:12:29 np0005604375 conmon[243367]: conmon dabeec8e2d7f5b84929c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope/container/memory.events
Feb  1 10:12:29 np0005604375 podman[243350]: 2026-02-01 15:12:29.925585164 +0000 UTC m=+0.166193286 container died dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb  1 10:12:29 np0005604375 systemd[1]: var-lib-containers-storage-overlay-51c11c545c082386c80a42ba418b2d0546e70709e7acb3136ec469c9f88848a4-merged.mount: Deactivated successfully.
Feb  1 10:12:30 np0005604375 podman[243350]: 2026-02-01 15:12:30.01342894 +0000 UTC m=+0.254037032 container remove dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  1 10:12:30 np0005604375 systemd[1]: libpod-conmon-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope: Deactivated successfully.
Feb  1 10:12:30 np0005604375 podman[243391]: 2026-02-01 15:12:30.174863462 +0000 UTC m=+0.054070819 container create 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:12:30 np0005604375 systemd[1]: Started libpod-conmon-62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084.scope.
Feb  1 10:12:30 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:12:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:30 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:30 np0005604375 podman[243391]: 2026-02-01 15:12:30.14665943 +0000 UTC m=+0.025866837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:12:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:30 np0005604375 podman[243391]: 2026-02-01 15:12:30.259531598 +0000 UTC m=+0.138738985 container init 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:12:30 np0005604375 podman[243391]: 2026-02-01 15:12:30.268569822 +0000 UTC m=+0.147777169 container start 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:12:30 np0005604375 podman[243391]: 2026-02-01 15:12:30.272488082 +0000 UTC m=+0.151695399 container attach 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]: {
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:    "0": [
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:        {
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "devices": [
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "/dev/loop3"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            ],
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_name": "ceph_lv0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_size": "21470642176",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "name": "ceph_lv0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "tags": {
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cluster_name": "ceph",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.crush_device_class": "",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.encrypted": "0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.objectstore": "bluestore",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osd_id": "0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.type": "block",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.vdo": "0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.with_tpm": "0"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            },
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "type": "block",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "vg_name": "ceph_vg0"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:        }
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:    ],
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:    "1": [
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:        {
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "devices": [
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "/dev/loop4"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            ],
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_name": "ceph_lv1",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_size": "21470642176",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "name": "ceph_lv1",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "tags": {
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cluster_name": "ceph",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.crush_device_class": "",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.encrypted": "0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.objectstore": "bluestore",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osd_id": "1",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.type": "block",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.vdo": "0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.with_tpm": "0"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            },
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "type": "block",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "vg_name": "ceph_vg1"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:        }
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:    ],
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:    "2": [
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:        {
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "devices": [
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "/dev/loop5"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            ],
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_name": "ceph_lv2",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_size": "21470642176",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "name": "ceph_lv2",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "tags": {
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.cluster_name": "ceph",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.crush_device_class": "",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.encrypted": "0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.objectstore": "bluestore",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osd_id": "2",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.type": "block",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.vdo": "0",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:                "ceph.with_tpm": "0"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            },
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "type": "block",
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:            "vg_name": "ceph_vg2"
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:        }
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]:    ]
Feb  1 10:12:30 np0005604375 beautiful_almeida[243408]: }
Feb  1 10:12:30 np0005604375 systemd[1]: libpod-62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084.scope: Deactivated successfully.
Feb  1 10:12:30 np0005604375 podman[243391]: 2026-02-01 15:12:30.568286065 +0000 UTC m=+0.447493382 container died 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:12:30 np0005604375 systemd[1]: var-lib-containers-storage-overlay-567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf-merged.mount: Deactivated successfully.
Feb  1 10:12:30 np0005604375 podman[243391]: 2026-02-01 15:12:30.606151448 +0000 UTC m=+0.485358765 container remove 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:12:30 np0005604375 systemd[1]: libpod-conmon-62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084.scope: Deactivated successfully.
Feb  1 10:12:31 np0005604375 podman[243490]: 2026-02-01 15:12:31.028941276 +0000 UTC m=+0.037812833 container create f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 10:12:31 np0005604375 systemd[1]: Started libpod-conmon-f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560.scope.
Feb  1 10:12:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:31 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:12:31 np0005604375 podman[243490]: 2026-02-01 15:12:31.013360048 +0000 UTC m=+0.022231615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:12:31 np0005604375 podman[243490]: 2026-02-01 15:12:31.113092578 +0000 UTC m=+0.121964135 container init f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:12:31 np0005604375 podman[243490]: 2026-02-01 15:12:31.117396079 +0000 UTC m=+0.126267606 container start f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 10:12:31 np0005604375 podman[243490]: 2026-02-01 15:12:31.120260169 +0000 UTC m=+0.129131746 container attach f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:12:31 np0005604375 exciting_mcclintock[243506]: 167 167
Feb  1 10:12:31 np0005604375 systemd[1]: libpod-f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560.scope: Deactivated successfully.
Feb  1 10:12:31 np0005604375 podman[243490]: 2026-02-01 15:12:31.121060701 +0000 UTC m=+0.129932228 container died f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:12:31 np0005604375 systemd[1]: var-lib-containers-storage-overlay-81562602b2ff7881f79a8f69a17e6f55e696f1c3419f324da2ad513a02b0f967-merged.mount: Deactivated successfully.
Feb  1 10:12:31 np0005604375 podman[243490]: 2026-02-01 15:12:31.155460267 +0000 UTC m=+0.164331794 container remove f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:12:31 np0005604375 systemd[1]: libpod-conmon-f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560.scope: Deactivated successfully.
Feb  1 10:12:31 np0005604375 podman[243530]: 2026-02-01 15:12:31.325925522 +0000 UTC m=+0.042485943 container create 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:12:31 np0005604375 systemd[1]: Started libpod-conmon-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope.
Feb  1 10:12:31 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:12:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:31 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:12:31 np0005604375 podman[243530]: 2026-02-01 15:12:31.30983076 +0000 UTC m=+0.026391231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:12:31 np0005604375 podman[243530]: 2026-02-01 15:12:31.429106018 +0000 UTC m=+0.145666469 container init 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 10:12:31 np0005604375 podman[243530]: 2026-02-01 15:12:31.436718472 +0000 UTC m=+0.153278893 container start 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:12:31 np0005604375 podman[243530]: 2026-02-01 15:12:31.440117027 +0000 UTC m=+0.156677448 container attach 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 10:12:32 np0005604375 lvm[243625]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:12:32 np0005604375 lvm[243625]: VG ceph_vg1 finished
Feb  1 10:12:32 np0005604375 lvm[243622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:12:32 np0005604375 lvm[243622]: VG ceph_vg0 finished
Feb  1 10:12:32 np0005604375 lvm[243627]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:12:32 np0005604375 lvm[243627]: VG ceph_vg2 finished
Feb  1 10:12:32 np0005604375 lvm[243628]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:12:32 np0005604375 lvm[243628]: VG ceph_vg1 finished
Feb  1 10:12:32 np0005604375 hungry_goldwasser[243546]: {}
Feb  1 10:12:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:32 np0005604375 systemd[1]: libpod-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope: Deactivated successfully.
Feb  1 10:12:32 np0005604375 systemd[1]: libpod-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope: Consumed 1.150s CPU time.
Feb  1 10:12:32 np0005604375 podman[243530]: 2026-02-01 15:12:32.271959696 +0000 UTC m=+0.988520137 container died 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:12:32 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed-merged.mount: Deactivated successfully.
Feb  1 10:12:32 np0005604375 podman[243530]: 2026-02-01 15:12:32.317193296 +0000 UTC m=+1.033753747 container remove 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:12:32 np0005604375 systemd[1]: libpod-conmon-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope: Deactivated successfully.
Feb  1 10:12:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:12:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:12:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:12:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:12:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:12:33 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:12:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb  1 10:12:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb  1 10:12:41 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb  1 10:12:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb  1 10:12:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb  1 10:12:42 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb  1 10:12:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb  1 10:12:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb  1 10:12:43 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb  1 10:12:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb  1 10:12:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb  1 10:12:45 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb  1 10:12:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.095 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.096 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.221 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.222 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.222 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:12:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 8.5 MiB/s wr, 78 op/s
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.306 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.306 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:46 np0005604375 nova_compute[238794]: 2026-02-01 15:12:46.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:12:47 np0005604375 nova_compute[238794]: 2026-02-01 15:12:47.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:12:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:12:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:12:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:12:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.731 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.731 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.731 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.732 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:12:48 np0005604375 nova_compute[238794]: 2026-02-01 15:12:48.732 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:12:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:12:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:12:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:12:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2429162025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.223 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.337 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.339 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.339 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.339 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.486 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.486 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:12:49 np0005604375 nova_compute[238794]: 2026-02-01 15:12:49.519 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:12:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:12:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556144620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:12:50 np0005604375 nova_compute[238794]: 2026-02-01 15:12:50.020 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:12:50 np0005604375 nova_compute[238794]: 2026-02-01 15:12:50.027 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:12:50 np0005604375 nova_compute[238794]: 2026-02-01 15:12:50.194 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:12:50 np0005604375 nova_compute[238794]: 2026-02-01 15:12:50.195 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:12:50 np0005604375 nova_compute[238794]: 2026-02-01 15:12:50.195 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:12:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Feb  1 10:12:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:12:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098049236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:12:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:12:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098049236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:12:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb  1 10:12:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb  1 10:12:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb  1 10:12:51 np0005604375 podman[243713]: 2026-02-01 15:12:51.991884402 +0000 UTC m=+0.076031435 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:12:52 np0005604375 podman[243714]: 2026-02-01 15:12:52.03099485 +0000 UTC m=+0.114989649 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 10:12:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb  1 10:12:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 MiB/s wr, 43 op/s
Feb  1 10:12:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:12:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:12:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:13:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:13:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:13:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:13:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:07.161+0000 7f8267782640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/20a8c9a2-cfa0-44d6-b2f2-a4472dc96dd6'.
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "format": "json"}]: dispatch
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:13:07.808 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:13:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:13:07.809 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:13:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:13:07.809 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/49597151-cba1-48e5-979e-cda79388de34'.
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/.meta.tmp'
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/.meta.tmp' to config b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/.meta'
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "format": "json"}]: dispatch
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb  1 10:13:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/0d56fdbc-9c41-43b1-9fb0-657d8d49f4ff'.
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp'
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp' to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta'
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "format": "json"}]: dispatch
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:08 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.viosrg(active, since 22m)
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:10 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:13:10.921 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:13:10 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:13:10.923 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/beffe961-0742-4156-ad43-3b52285fd640'.
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/.meta.tmp'
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/.meta.tmp' to config b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/.meta'
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "format": "json"}]: dispatch
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:12 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9", "format": "json"}]: dispatch
Feb  1 10:13:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 2 op/s
Feb  1 10:13:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 2 op/s
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "new_size": 2147483648, "format": "json"}]: dispatch
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "format": "json"}]: dispatch
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:915665b7-ff70-4faa-88a3-0d32becf6f29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:915665b7-ff70-4faa-88a3-0d32becf6f29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '915665b7-ff70-4faa-88a3-0d32becf6f29' of type subvolume
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.699+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '915665b7-ff70-4faa-88a3-0d32becf6f29' of type subvolume
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29'' moved to trashcan
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:13:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "format": "json"}]: dispatch
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bde02bc8-059b-4cad-a246-c96036843cf2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bde02bc8-059b-4cad-a246-c96036843cf2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bde02bc8-059b-4cad-a246-c96036843cf2' of type subvolume
Feb  1 10:13:16 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:16.127+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bde02bc8-059b-4cad-a246-c96036843cf2' of type subvolume
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2'' moved to trashcan
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb  1 10:13:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb  1 10:13:17 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.viosrg(active, since 22m)
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9_89fefdfc-5a05-4ed0-8819-b63c5620160b", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9_89fefdfc-5a05-4ed0-8819-b63c5620160b, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp'
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp' to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta'
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9_89fefdfc-5a05-4ed0-8819-b63c5620160b, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp'
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp' to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta'
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:13:17
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta']
Feb  1 10:13:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:13:18 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:13:18.925 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:13:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/b210dff2-6407-4abe-a039-ae386c608b9f'.
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/.meta.tmp'
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/.meta.tmp' to config b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/.meta'
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "format": "json"}]: dispatch
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:19 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/4aec6ca1-043b-4958-8d9c-898a56795b18'.
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/.meta.tmp'
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/.meta.tmp' to config b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/.meta'
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "format": "json"}]: dispatch
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb  1 10:13:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "format": "json"}]: dispatch
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '618c0e6c-2fb1-44ff-85f4-15df368e2591' of type subvolume
Feb  1 10:13:20 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:20.747+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '618c0e6c-2fb1-44ff-85f4-15df368e2591' of type subvolume
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591'' moved to trashcan
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb  1 10:13:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 26 KiB/s wr, 8 op/s
Feb  1 10:13:22 np0005604375 podman[243795]: 2026-02-01 15:13:22.977363795 +0000 UTC m=+0.065612901 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Feb  1 10:13:22 np0005604375 podman[243794]: 2026-02-01 15:13:22.978772585 +0000 UTC m=+0.070909890 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb  1 10:13:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "new_size": 2147483648, "format": "json"}]: dispatch
Feb  1 10:13:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 6 op/s
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "format": "json"}]: dispatch
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd6fb31c-809d-4c83-9761-28c8527a3b81' of type subvolume
Feb  1 10:13:24 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:24.716+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd6fb31c-809d-4c83-9761-28c8527a3b81' of type subvolume
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81'' moved to trashcan
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb  1 10:13:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb  1 10:13:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb  1 10:13:25 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb  1 10:13:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "format": "json"}]: dispatch
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '55a7edb3-0742-4b44-9cb7-64d96e0ec803' of type subvolume
Feb  1 10:13:26 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:26.362+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '55a7edb3-0742-4b44-9cb7-64d96e0ec803' of type subvolume
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803'' moved to trashcan
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659498995322459 of space, bias 1.0, pg target 0.19978496985967376 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.837368437979474e-06 of space, bias 4.0, pg target 0.00940484212557537 quantized to 16 (current 16)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/cc27705c-3e9e-4106-8c7b-7566003143da'.
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/.meta.tmp'
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/.meta.tmp' to config b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/.meta'
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "format": "json"}]: dispatch
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb  1 10:13:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb  1 10:13:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb  1 10:13:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb  1 10:13:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb  1 10:13:31 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "format": "json"}]: dispatch
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3147dea9-81aa-476a-8ff6-685b8fe5fd2e' of type subvolume
Feb  1 10:13:32 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:32.103+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3147dea9-81aa-476a-8ff6-685b8fe5fd2e' of type subvolume
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e'' moved to trashcan
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb  1 10:13:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 23 KiB/s wr, 8 op/s
Feb  1 10:13:33 np0005604375 podman[243934]: 2026-02-01 15:13:33.008688026 +0000 UTC m=+0.083153674 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 10:13:33 np0005604375 podman[243934]: 2026-02-01 15:13:33.162737807 +0000 UTC m=+0.237203395 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  1 10:13:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:13:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:13:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 665 B/s rd, 20 KiB/s wr, 7 op/s
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/d2950875-19b2-4633-8278-a9181fa57d3d'.
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/.meta.tmp'
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/.meta.tmp' to config b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/.meta'
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "format": "json"}]: dispatch
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb  1 10:13:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:34 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:13:34 np0005604375 podman[244266]: 2026-02-01 15:13:34.881062356 +0000 UTC m=+0.045680612 container create 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb  1 10:13:34 np0005604375 systemd[1]: Started libpod-conmon-98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0.scope.
Feb  1 10:13:34 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:13:34 np0005604375 podman[244266]: 2026-02-01 15:13:34.863094613 +0000 UTC m=+0.027712909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:13:34 np0005604375 podman[244266]: 2026-02-01 15:13:34.969667072 +0000 UTC m=+0.134285358 container init 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 10:13:34 np0005604375 podman[244266]: 2026-02-01 15:13:34.975936498 +0000 UTC m=+0.140554794 container start 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  1 10:13:34 np0005604375 podman[244266]: 2026-02-01 15:13:34.979917079 +0000 UTC m=+0.144535335 container attach 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:13:34 np0005604375 objective_mccarthy[244283]: 167 167
Feb  1 10:13:34 np0005604375 systemd[1]: libpod-98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0.scope: Deactivated successfully.
Feb  1 10:13:34 np0005604375 podman[244266]: 2026-02-01 15:13:34.981433282 +0000 UTC m=+0.146051538 container died 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:13:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-70ec4744502c9c0b54fc2afb407b35538ed15da40ad756020f8f33434ac9bb78-merged.mount: Deactivated successfully.
Feb  1 10:13:35 np0005604375 podman[244266]: 2026-02-01 15:13:35.080091579 +0000 UTC m=+0.244709845 container remove 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:13:35 np0005604375 systemd[1]: libpod-conmon-98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0.scope: Deactivated successfully.
Feb  1 10:13:35 np0005604375 podman[244306]: 2026-02-01 15:13:35.227374511 +0000 UTC m=+0.040622411 container create 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:13:35 np0005604375 systemd[1]: Started libpod-conmon-9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7.scope.
Feb  1 10:13:35 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:13:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:35 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:35 np0005604375 podman[244306]: 2026-02-01 15:13:35.208356097 +0000 UTC m=+0.021603997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:13:35 np0005604375 podman[244306]: 2026-02-01 15:13:35.357184662 +0000 UTC m=+0.170432632 container init 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:13:35 np0005604375 podman[244306]: 2026-02-01 15:13:35.363843389 +0000 UTC m=+0.177091259 container start 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 10:13:35 np0005604375 podman[244306]: 2026-02-01 15:13:35.368039746 +0000 UTC m=+0.181287636 container attach 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb  1 10:13:35 np0005604375 gallant_kapitsa[244322]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:13:35 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  1 10:13:35 np0005604375 gallant_kapitsa[244322]: --> All data devices are unavailable
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/fec65527-d4a9-4f7c-a85e-6e18557fd6b3'.
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/.meta.tmp'
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/.meta.tmp' to config b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/.meta'
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "format": "json"}]: dispatch
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb  1 10:13:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb  1 10:13:35 np0005604375 systemd[1]: libpod-9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7.scope: Deactivated successfully.
Feb  1 10:13:35 np0005604375 podman[244306]: 2026-02-01 15:13:35.818474491 +0000 UTC m=+0.631722361 container died 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:13:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:35 np0005604375 systemd[1]: var-lib-containers-storage-overlay-be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45-merged.mount: Deactivated successfully.
Feb  1 10:13:35 np0005604375 podman[244306]: 2026-02-01 15:13:35.867608369 +0000 UTC m=+0.680856259 container remove 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:13:35 np0005604375 systemd[1]: libpod-conmon-9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7.scope: Deactivated successfully.
Feb  1 10:13:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb  1 10:13:36 np0005604375 podman[244418]: 2026-02-01 15:13:36.302685092 +0000 UTC m=+0.059698035 container create 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 10:13:36 np0005604375 systemd[1]: Started libpod-conmon-54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c.scope.
Feb  1 10:13:36 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:13:36 np0005604375 podman[244418]: 2026-02-01 15:13:36.277243099 +0000 UTC m=+0.034256142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:13:36 np0005604375 podman[244418]: 2026-02-01 15:13:36.377140621 +0000 UTC m=+0.134153604 container init 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:13:36 np0005604375 podman[244418]: 2026-02-01 15:13:36.381804952 +0000 UTC m=+0.138817885 container start 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 10:13:36 np0005604375 pedantic_kirch[244435]: 167 167
Feb  1 10:13:36 np0005604375 podman[244418]: 2026-02-01 15:13:36.386942376 +0000 UTC m=+0.143955329 container attach 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:13:36 np0005604375 systemd[1]: libpod-54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c.scope: Deactivated successfully.
Feb  1 10:13:36 np0005604375 podman[244418]: 2026-02-01 15:13:36.38743217 +0000 UTC m=+0.144445133 container died 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:13:36 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e9137cacc871f2559aed242ca0750c403cc9737bb3c7c1023d3f60b851e089c1-merged.mount: Deactivated successfully.
Feb  1 10:13:36 np0005604375 podman[244418]: 2026-02-01 15:13:36.434536271 +0000 UTC m=+0.191549234 container remove 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 10:13:36 np0005604375 systemd[1]: libpod-conmon-54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c.scope: Deactivated successfully.
Feb  1 10:13:36 np0005604375 podman[244459]: 2026-02-01 15:13:36.614834638 +0000 UTC m=+0.048276065 container create a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 10:13:36 np0005604375 systemd[1]: Started libpod-conmon-a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7.scope.
Feb  1 10:13:36 np0005604375 podman[244459]: 2026-02-01 15:13:36.592268615 +0000 UTC m=+0.025710082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:13:36 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:13:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:36 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:36 np0005604375 podman[244459]: 2026-02-01 15:13:36.711088628 +0000 UTC m=+0.144530075 container init a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb  1 10:13:36 np0005604375 podman[244459]: 2026-02-01 15:13:36.719400231 +0000 UTC m=+0.152841658 container start a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 10:13:36 np0005604375 podman[244459]: 2026-02-01 15:13:36.72363888 +0000 UTC m=+0.157080307 container attach a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]: {
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:    "0": [
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:        {
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "devices": [
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "/dev/loop3"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            ],
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_name": "ceph_lv0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_size": "21470642176",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "name": "ceph_lv0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "tags": {
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cluster_name": "ceph",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.crush_device_class": "",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.encrypted": "0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.objectstore": "bluestore",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osd_id": "0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.type": "block",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.vdo": "0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.with_tpm": "0"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            },
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "type": "block",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "vg_name": "ceph_vg0"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:        }
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:    ],
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:    "1": [
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:        {
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "devices": [
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "/dev/loop4"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            ],
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_name": "ceph_lv1",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_size": "21470642176",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "name": "ceph_lv1",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "tags": {
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cluster_name": "ceph",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.crush_device_class": "",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.encrypted": "0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.objectstore": "bluestore",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osd_id": "1",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.type": "block",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.vdo": "0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.with_tpm": "0"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            },
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "type": "block",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "vg_name": "ceph_vg1"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:        }
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:    ],
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:    "2": [
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:        {
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "devices": [
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "/dev/loop5"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            ],
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_name": "ceph_lv2",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_size": "21470642176",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "name": "ceph_lv2",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "tags": {
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.cluster_name": "ceph",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.crush_device_class": "",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.encrypted": "0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.objectstore": "bluestore",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osd_id": "2",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.type": "block",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.vdo": "0",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:                "ceph.with_tpm": "0"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            },
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "type": "block",
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:            "vg_name": "ceph_vg2"
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:        }
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]:    ]
Feb  1 10:13:36 np0005604375 zen_kapitsa[244475]: }
Feb  1 10:13:37 np0005604375 systemd[1]: libpod-a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7.scope: Deactivated successfully.
Feb  1 10:13:37 np0005604375 podman[244459]: 2026-02-01 15:13:37.004973962 +0000 UTC m=+0.438415389 container died a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:13:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc-merged.mount: Deactivated successfully.
Feb  1 10:13:37 np0005604375 podman[244459]: 2026-02-01 15:13:37.050360445 +0000 UTC m=+0.483801852 container remove a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:13:37 np0005604375 systemd[1]: libpod-conmon-a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7.scope: Deactivated successfully.
Feb  1 10:13:37 np0005604375 podman[244558]: 2026-02-01 15:13:37.431910187 +0000 UTC m=+0.039104007 container create d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:13:37 np0005604375 systemd[1]: Started libpod-conmon-d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134.scope.
Feb  1 10:13:37 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:13:37 np0005604375 podman[244558]: 2026-02-01 15:13:37.488204357 +0000 UTC m=+0.095398167 container init d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:13:37 np0005604375 podman[244558]: 2026-02-01 15:13:37.492687772 +0000 UTC m=+0.099881582 container start d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:13:37 np0005604375 podman[244558]: 2026-02-01 15:13:37.495330946 +0000 UTC m=+0.102524766 container attach d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:13:37 np0005604375 romantic_feistel[244574]: 167 167
Feb  1 10:13:37 np0005604375 systemd[1]: libpod-d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134.scope: Deactivated successfully.
Feb  1 10:13:37 np0005604375 podman[244558]: 2026-02-01 15:13:37.497191589 +0000 UTC m=+0.104385409 container died d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 10:13:37 np0005604375 podman[244558]: 2026-02-01 15:13:37.416437013 +0000 UTC m=+0.023630863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:13:37 np0005604375 systemd[1]: var-lib-containers-storage-overlay-687d16589ead9d3db39734e9a293238d3c82aa9dafe8bef29f066431983b402b-merged.mount: Deactivated successfully.
Feb  1 10:13:37 np0005604375 podman[244558]: 2026-02-01 15:13:37.675677385 +0000 UTC m=+0.282871235 container remove d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:13:37 np0005604375 systemd[1]: libpod-conmon-d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134.scope: Deactivated successfully.
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/2f852e97-4db6-4d48-a89b-3b24b6eaae9b'.
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/.meta.tmp'
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/.meta.tmp' to config b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/.meta'
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "format": "json"}]: dispatch
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb  1 10:13:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb  1 10:13:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:37 np0005604375 podman[244598]: 2026-02-01 15:13:37.82201502 +0000 UTC m=+0.041002911 container create 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:13:37 np0005604375 systemd[1]: Started libpod-conmon-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope.
Feb  1 10:13:37 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:13:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:37 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:13:37 np0005604375 podman[244598]: 2026-02-01 15:13:37.800135896 +0000 UTC m=+0.019123857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:13:37 np0005604375 podman[244598]: 2026-02-01 15:13:37.919195866 +0000 UTC m=+0.138183787 container init 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:13:37 np0005604375 podman[244598]: 2026-02-01 15:13:37.925259016 +0000 UTC m=+0.144246907 container start 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:13:37 np0005604375 podman[244598]: 2026-02-01 15:13:37.928792675 +0000 UTC m=+0.147780606 container attach 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb  1 10:13:38 np0005604375 lvm[244691]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:13:38 np0005604375 lvm[244693]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:13:38 np0005604375 lvm[244693]: VG ceph_vg1 finished
Feb  1 10:13:38 np0005604375 lvm[244691]: VG ceph_vg0 finished
Feb  1 10:13:38 np0005604375 lvm[244695]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:13:38 np0005604375 lvm[244695]: VG ceph_vg2 finished
Feb  1 10:13:38 np0005604375 hardcore_payne[244614]: {}
Feb  1 10:13:38 np0005604375 systemd[1]: libpod-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope: Deactivated successfully.
Feb  1 10:13:38 np0005604375 systemd[1]: libpod-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope: Consumed 1.007s CPU time.
Feb  1 10:13:38 np0005604375 podman[244598]: 2026-02-01 15:13:38.618749639 +0000 UTC m=+0.837737560 container died 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  1 10:13:38 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a-merged.mount: Deactivated successfully.
Feb  1 10:13:38 np0005604375 podman[244598]: 2026-02-01 15:13:38.682456646 +0000 UTC m=+0.901444527 container remove 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb  1 10:13:38 np0005604375 systemd[1]: libpod-conmon-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope: Deactivated successfully.
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/366c0b13-828c-411b-9570-e2a15ce26320'.
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp'
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp' to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta'
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "format": "json"}]: dispatch
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dc838023-ada6-4f22-947b-32f93b678270", "format": "json"}]: dispatch
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dc838023-ada6-4f22-947b-32f93b678270, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dc838023-ada6-4f22-947b-32f93b678270, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dc838023-ada6-4f22-947b-32f93b678270' of type subvolume
Feb  1 10:13:39 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:39.297+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dc838023-ada6-4f22-947b-32f93b678270' of type subvolume
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270'' moved to trashcan
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb  1 10:13:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb  1 10:13:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/6f2ed66e-0dd2-4363-8246-72938d7418e0'.
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/.meta.tmp'
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/.meta.tmp' to config b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/.meta'
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "format": "json"}]: dispatch
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb  1 10:13:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb  1 10:13:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 20 KiB/s wr, 5 op/s
Feb  1 10:13:42 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c", "format": "json"}]: dispatch
Feb  1 10:13:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870", "format": "json"}]: dispatch
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870_8c23c0e7-dc6a-4f86-92c2-9b90697f38d7", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870_8c23c0e7-dc6a-4f86-92c2-9b90697f38d7, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870_8c23c0e7-dc6a-4f86-92c2-9b90697f38d7, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:13:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 5 op/s
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/072f93e0-e115-4462-882b-057e50ec20e0'.
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/.meta.tmp'
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/.meta.tmp' to config b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/.meta'
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "format": "json"}]: dispatch
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb  1 10:13:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:13:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "format": "json"}]: dispatch
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:45 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:45.505+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c14d6f49-4f6c-4972-908b-48b473f08bc0' of type subvolume
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c14d6f49-4f6c-4972-908b-48b473f08bc0' of type subvolume
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0'' moved to trashcan
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb  1 10:13:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:46 np0005604375 nova_compute[238794]: 2026-02-01 15:13:46.191 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:46 np0005604375 nova_compute[238794]: 2026-02-01 15:13:46.192 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:46 np0005604375 nova_compute[238794]: 2026-02-01 15:13:46.192 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:13:46 np0005604375 nova_compute[238794]: 2026-02-01 15:13:46.192 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:13:46 np0005604375 nova_compute[238794]: 2026-02-01 15:13:46.258 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:13:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 34 KiB/s wr, 10 op/s
Feb  1 10:13:46 np0005604375 nova_compute[238794]: 2026-02-01 15:13:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:46 np0005604375 nova_compute[238794]: 2026-02-01 15:13:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:47 np0005604375 nova_compute[238794]: 2026-02-01 15:13:47.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 27 KiB/s wr, 7 op/s
Feb  1 10:13:48 np0005604375 nova_compute[238794]: 2026-02-01 15:13:48.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:48 np0005604375 nova_compute[238794]: 2026-02-01 15:13:48.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "format": "json"}]: dispatch
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dbe4c39-4709-4ce8-bbd5-f96172636c6f' of type subvolume
Feb  1 10:13:48 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:48.832+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dbe4c39-4709-4ce8-bbd5-f96172636c6f' of type subvolume
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f'' moved to trashcan
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:13:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:13:49 np0005604375 nova_compute[238794]: 2026-02-01 15:13:49.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:49 np0005604375 nova_compute[238794]: 2026-02-01 15:13:49.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb  1 10:13:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb  1 10:13:49 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "format": "json"}]: dispatch
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:50 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:50.257+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4' of type subvolume
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4' of type subvolume
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4'' moved to trashcan
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb  1 10:13:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 8 op/s
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.342 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:13:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:13:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577318470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.815 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:13:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:13:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954235550' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:13:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:13:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954235550' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.966 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.967 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.967 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:13:50 np0005604375 nova_compute[238794]: 2026-02-01 15:13:50.967 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.029 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.029 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.043 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:13:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:13:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432741199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.581 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:13:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6", "format": "json"}]: dispatch
Feb  1 10:13:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.586 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:13:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.598 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.599 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:13:51 np0005604375 nova_compute[238794]: 2026-02-01 15:13:51.600 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:13:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 31 KiB/s wr, 8 op/s
Feb  1 10:13:53 np0005604375 podman[244781]: 2026-02-01 15:13:53.97024103 +0000 UTC m=+0.054531601 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:13:54 np0005604375 podman[244782]: 2026-02-01 15:13:54.031173869 +0000 UTC m=+0.108555226 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 31 KiB/s wr, 8 op/s
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c_765a0401-6123-4825-9702-0df27b7178b8", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c_765a0401-6123-4825-9702-0df27b7178b8, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp'
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp' to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta'
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c_765a0401-6123-4825-9702-0df27b7178b8, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp'
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp' to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta'
Feb  1 10:13:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:55 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc", "format": "json"}]: dispatch
Feb  1 10:13:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  1 10:13:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb  1 10:13:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb  1 10:13:56 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb  1 10:13:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 8 op/s
Feb  1 10:13:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb  1 10:13:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb  1 10:13:57 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb  1 10:13:57 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd", "format": "json"}]: dispatch
Feb  1 10:13:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "format": "json"}]: dispatch
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1cd77113-e6d6-4345-8483-5f1b1ddb866c' of type subvolume
Feb  1 10:13:58 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:58.138+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1cd77113-e6d6-4345-8483-5f1b1ddb866c' of type subvolume
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "force": true, "format": "json"}]: dispatch
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c'' moved to trashcan
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb  1 10:13:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 8 op/s
Feb  1 10:14:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 13 KiB/s wr, 4 op/s
Feb  1 10:14:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add", "format": "json"}]: dispatch
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "format": "json"}]: dispatch
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b6c72970-f609-412a-968d-5d3fe02bddc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b6c72970-f609-412a-968d-5d3fe02bddc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6c72970-f609-412a-968d-5d3fe02bddc0' of type subvolume
Feb  1 10:14:01 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:01.958+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6c72970-f609-412a-968d-5d3fe02bddc0' of type subvolume
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0'' moved to trashcan
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb  1 10:14:02 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306", "format": "json"}]: dispatch
Feb  1 10:14:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 26 KiB/s wr, 8 op/s
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 499 B/s rd, 14 KiB/s wr, 4 op/s
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea'.
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/.meta.tmp'
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/.meta.tmp' to config b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/.meta'
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "format": "json"}]: dispatch
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb  1 10:14:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb  1 10:14:06 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306_999c7871-ceb7-40d5-8403-0424caa50678", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306_999c7871-ceb7-40d5-8403-0424caa50678, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306_999c7871-ceb7-40d5-8403-0424caa50678, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 668 B/s rd, 30 KiB/s wr, 7 op/s
Feb  1 10:14:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:14:07.809 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:14:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:14:07.810 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:14:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:14:07.810 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:14:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "tenant_id": "999f6f2ae9a8410ca0b94eca9aa23d7a", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:14:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume authorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, tenant_id:999f6f2ae9a8410ca0b94eca9aa23d7a, vol_name:cephfs) < ""
Feb  1 10:14:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} v 0)
Feb  1 10:14:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb  1 10:14:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-64491543 with tenant 999f6f2ae9a8410ca0b94eca9aa23d7a
Feb  1 10:14:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:14:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:14:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:14:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume authorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, tenant_id:999f6f2ae9a8410ca0b94eca9aa23d7a, vol_name:cephfs) < ""
Feb  1 10:14:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb  1 10:14:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:14:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:14:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 28 KiB/s wr, 6 op/s
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "format": "json"}]: dispatch
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume deauthorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} v 0)
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"} v 0)
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"} : dispatch
Feb  1 10:14:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"}]': finished
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume deauthorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "format": "json"}]: dispatch
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume evict, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-64491543, client_metadata.root=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea
Feb  1 10:14:09 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-64491543,client_metadata.root=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea],prefix=session evict} (starting...)
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume evict, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "format": "json"}]: dispatch
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c19a0244-e063-4af0-8894-414616a3f2b3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c19a0244-e063-4af0-8894-414616a3f2b3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c19a0244-e063-4af0-8894-414616a3f2b3' of type subvolume
Feb  1 10:14:09 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:09.395+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c19a0244-e063-4af0-8894-414616a3f2b3' of type subvolume
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3'' moved to trashcan
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add_0c3f4871-3746-4345-899c-cde05e7ab29a", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add_0c3f4871-3746-4345-899c-cde05e7ab29a, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add_0c3f4871-3746-4345-899c-cde05e7ab29a, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb  1 10:14:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"} : dispatch
Feb  1 10:14:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"}]': finished
Feb  1 10:14:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 21 KiB/s wr, 4 op/s
Feb  1 10:14:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 12 op/s
Feb  1 10:14:12 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:14:12.497 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  1 10:14:12 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:14:12.498 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd_2401e4d9-9c2d-4644-befb-68ed41585c58", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd_2401e4d9-9c2d-4644-befb-68ed41585c58, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd_2401e4d9-9c2d-4644-befb-68ed41585c58, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 249 B/s rd, 53 KiB/s wr, 8 op/s
Feb  1 10:14:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb  1 10:14:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb  1 10:14:14 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb  1 10:14:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb  1 10:14:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb  1 10:14:15 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb  1 10:14:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb  1 10:14:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb  1 10:14:16 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 96 KiB/s wr, 17 op/s
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc_8622dc9d-aad8-45b9-8bf6-4ce20c111ec2", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc_8622dc9d-aad8-45b9-8bf6-4ce20c111ec2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc_8622dc9d-aad8-45b9-8bf6-4ce20c111ec2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:14:17
Feb  1 10:14:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:14:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:14:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Feb  1 10:14:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 6 op/s
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/102aa16c-b8d8-4a6c-80e3-ae8484f4e160'.
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/.meta.tmp'
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/.meta.tmp' to config b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/.meta'
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "format": "json"}]: dispatch
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb  1 10:14:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:14:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:14:19 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:14:19.500 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  1 10:14:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb  1 10:14:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb  1 10:14:19 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6_6055dc83-b33d-4c1e-b4e8-46b0bf50f6e2", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6_6055dc83-b33d-4c1e-b4e8-46b0bf50f6e2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6_6055dc83-b33d-4c1e-b4e8-46b0bf50f6e2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 689 B/s rd, 25 KiB/s wr, 6 op/s
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "format": "json"}]: dispatch
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0d8696eb-14a5-4abf-b5f8-d5c0093d2c06' of type subvolume
Feb  1 10:14:21 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:21.094+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0d8696eb-14a5-4abf-b5f8-d5c0093d2c06' of type subvolume
Feb  1 10:14:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06'' moved to trashcan
Feb  1 10:14:21 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 330 B/s rd, 43 KiB/s wr, 7 op/s
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "format": "json"}]: dispatch
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa2fa960-5933-441d-ba7d-210a851e8867, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa2fa960-5933-441d-ba7d-210a851e8867, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa2fa960-5933-441d-ba7d-210a851e8867' of type subvolume
Feb  1 10:14:22 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:22.759+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa2fa960-5933-441d-ba7d-210a851e8867' of type subvolume
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867'' moved to trashcan
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb  1 10:14:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb  1 10:14:23 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/63782069-23d1-48cb-bfe3-e74b20e4e487'.
Feb  1 10:14:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/.meta.tmp'
Feb  1 10:14:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/.meta.tmp' to config b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/.meta'
Feb  1 10:14:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb  1 10:14:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "format": "json"}]: dispatch
Feb  1 10:14:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb  1 10:14:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb  1 10:14:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 33 KiB/s wr, 5 op/s
Feb  1 10:14:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb  1 10:14:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb  1 10:14:24 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb  1 10:14:24 np0005604375 podman[244832]: 2026-02-01 15:14:24.977097269 +0000 UTC m=+0.065707094 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  1 10:14:25 np0005604375 podman[244833]: 2026-02-01 15:14:25.008741106 +0000 UTC m=+0.094083240 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 10:14:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb  1 10:14:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb  1 10:14:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb  1 10:14:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 92 KiB/s wr, 14 op/s
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/b3263236-0a75-46ae-ab59-83a44da59eb1'.
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/.meta.tmp'
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/.meta.tmp' to config b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/.meta'
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "format": "json"}]: dispatch
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb  1 10:14:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb  1 10:14:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659514365146859 of space, bias 1.0, pg target 0.19978543095440576 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 3.815308953585969e-05 of space, bias 4.0, pg target 0.045783707443031625 quantized to 16 (current 16)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:14:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 710 B/s rd, 41 KiB/s wr, 7 op/s
Feb  1 10:14:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 36 KiB/s wr, 5 op/s
Feb  1 10:14:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb  1 10:14:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb  1 10:14:31 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb  1 10:14:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "format": "json"}]: dispatch
Feb  1 10:14:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:62d0c62a-1088-49db-8483-cc680a52ec63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:62d0c62a-1088-49db-8483-cc680a52ec63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62d0c62a-1088-49db-8483-cc680a52ec63' of type subvolume
Feb  1 10:14:32 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:32.001+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62d0c62a-1088-49db-8483-cc680a52ec63' of type subvolume
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63'' moved to trashcan
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb  1 10:14:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 660 B/s rd, 50 KiB/s wr, 8 op/s
Feb  1 10:14:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 2 op/s
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/5f28e54f-2195-45a3-b703-e4eee7e9f6dd'.
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/.meta.tmp'
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/.meta.tmp' to config b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/.meta'
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "format": "json"}]: dispatch
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb  1 10:14:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb  1 10:14:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb  1 10:14:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb  1 10:14:36 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb  1 10:14:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb  1 10:14:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/86c82a78-ed68-479a-856d-a96ae3edab27'.
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "format": "json"}]: dispatch
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:39 np0005604375 podman[245020]: 2026-02-01 15:14:39.863540536 +0000 UTC m=+0.053386828 container create 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:14:39 np0005604375 systemd[1]: Started libpod-conmon-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope.
Feb  1 10:14:39 np0005604375 podman[245020]: 2026-02-01 15:14:39.841996022 +0000 UTC m=+0.031842314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:14:39 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:14:39 np0005604375 podman[245020]: 2026-02-01 15:14:39.961265677 +0000 UTC m=+0.151112029 container init 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 10:14:39 np0005604375 podman[245020]: 2026-02-01 15:14:39.971521035 +0000 UTC m=+0.161367287 container start 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 10:14:39 np0005604375 podman[245020]: 2026-02-01 15:14:39.975146287 +0000 UTC m=+0.164992629 container attach 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:14:39 np0005604375 romantic_sutherland[245036]: 167 167
Feb  1 10:14:39 np0005604375 systemd[1]: libpod-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope: Deactivated successfully.
Feb  1 10:14:39 np0005604375 conmon[245036]: conmon 3b600fc83ec7d3275d4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope/container/memory.events
Feb  1 10:14:39 np0005604375 podman[245020]: 2026-02-01 15:14:39.981138375 +0000 UTC m=+0.170984647 container died 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:14:40 np0005604375 systemd[1]: var-lib-containers-storage-overlay-92bf5b88c44e5021338ba854617e4bfb761e947424d0d44f338a53c219021b39-merged.mount: Deactivated successfully.
Feb  1 10:14:40 np0005604375 podman[245020]: 2026-02-01 15:14:40.021771925 +0000 UTC m=+0.211618177 container remove 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:14:40 np0005604375 systemd[1]: libpod-conmon-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope: Deactivated successfully.
Feb  1 10:14:40 np0005604375 podman[245060]: 2026-02-01 15:14:40.1731258 +0000 UTC m=+0.042415091 container create 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 10:14:40 np0005604375 systemd[1]: Started libpod-conmon-6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef.scope.
Feb  1 10:14:40 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:14:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:40 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:40 np0005604375 podman[245060]: 2026-02-01 15:14:40.154533729 +0000 UTC m=+0.023823050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:14:40 np0005604375 podman[245060]: 2026-02-01 15:14:40.267356733 +0000 UTC m=+0.136646034 container init 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:14:40 np0005604375 podman[245060]: 2026-02-01 15:14:40.27471165 +0000 UTC m=+0.144000941 container start 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:14:40 np0005604375 podman[245060]: 2026-02-01 15:14:40.27827564 +0000 UTC m=+0.147565021 container attach 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 223 B/s rd, 17 KiB/s wr, 2 op/s
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "format": "json"}]: dispatch
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '234ef068-f24e-4b9f-8f83-1f4a01701b53' of type subvolume
Feb  1 10:14:40 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:40.395+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '234ef068-f24e-4b9f-8f83-1f4a01701b53' of type subvolume
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53'' moved to trashcan
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb  1 10:14:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:14:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:14:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:14:40 np0005604375 upbeat_jemison[245077]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:14:40 np0005604375 upbeat_jemison[245077]: --> All data devices are unavailable
Feb  1 10:14:40 np0005604375 systemd[1]: libpod-6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef.scope: Deactivated successfully.
Feb  1 10:14:40 np0005604375 podman[245097]: 2026-02-01 15:14:40.780400473 +0000 UTC m=+0.037305357 container died 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  1 10:14:40 np0005604375 systemd[1]: var-lib-containers-storage-overlay-3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3-merged.mount: Deactivated successfully.
Feb  1 10:14:40 np0005604375 podman[245097]: 2026-02-01 15:14:40.82875321 +0000 UTC m=+0.085658044 container remove 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 10:14:40 np0005604375 systemd[1]: libpod-conmon-6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef.scope: Deactivated successfully.
Feb  1 10:14:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:41 np0005604375 podman[245173]: 2026-02-01 15:14:41.291715036 +0000 UTC m=+0.055438736 container create e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:14:41 np0005604375 systemd[1]: Started libpod-conmon-e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b.scope.
Feb  1 10:14:41 np0005604375 podman[245173]: 2026-02-01 15:14:41.264843962 +0000 UTC m=+0.028567732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:14:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:14:41 np0005604375 podman[245173]: 2026-02-01 15:14:41.388912642 +0000 UTC m=+0.152636412 container init e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:14:41 np0005604375 podman[245173]: 2026-02-01 15:14:41.393753608 +0000 UTC m=+0.157477338 container start e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:14:41 np0005604375 blissful_gagarin[245189]: 167 167
Feb  1 10:14:41 np0005604375 systemd[1]: libpod-e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b.scope: Deactivated successfully.
Feb  1 10:14:41 np0005604375 podman[245173]: 2026-02-01 15:14:41.401779663 +0000 UTC m=+0.165503473 container attach e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:14:41 np0005604375 podman[245173]: 2026-02-01 15:14:41.402178214 +0000 UTC m=+0.165901934 container died e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 10:14:41 np0005604375 systemd[1]: var-lib-containers-storage-overlay-71ceee7451b548ea3e0750eced4d5937c55a1c1fb6499f75875edc7ea3b0cf3b-merged.mount: Deactivated successfully.
Feb  1 10:14:41 np0005604375 podman[245173]: 2026-02-01 15:14:41.4975607 +0000 UTC m=+0.261284420 container remove e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:14:41 np0005604375 systemd[1]: libpod-conmon-e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b.scope: Deactivated successfully.
Feb  1 10:14:41 np0005604375 podman[245215]: 2026-02-01 15:14:41.683827035 +0000 UTC m=+0.056222358 container create f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:14:41 np0005604375 systemd[1]: Started libpod-conmon-f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326.scope.
Feb  1 10:14:41 np0005604375 podman[245215]: 2026-02-01 15:14:41.658377691 +0000 UTC m=+0.030773094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:14:41 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:14:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:41 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:41 np0005604375 podman[245215]: 2026-02-01 15:14:41.789257292 +0000 UTC m=+0.161652635 container init f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  1 10:14:41 np0005604375 podman[245215]: 2026-02-01 15:14:41.795745254 +0000 UTC m=+0.168140577 container start f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:14:41 np0005604375 podman[245215]: 2026-02-01 15:14:41.817530515 +0000 UTC m=+0.189925868 container attach f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:14:42 np0005604375 competent_booth[245232]: {
Feb  1 10:14:42 np0005604375 competent_booth[245232]:    "0": [
Feb  1 10:14:42 np0005604375 competent_booth[245232]:        {
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "devices": [
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "/dev/loop3"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            ],
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_name": "ceph_lv0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_size": "21470642176",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "name": "ceph_lv0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "tags": {
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cluster_name": "ceph",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.crush_device_class": "",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.encrypted": "0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.objectstore": "bluestore",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osd_id": "0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.type": "block",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.vdo": "0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.with_tpm": "0"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            },
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "type": "block",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "vg_name": "ceph_vg0"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:        }
Feb  1 10:14:42 np0005604375 competent_booth[245232]:    ],
Feb  1 10:14:42 np0005604375 competent_booth[245232]:    "1": [
Feb  1 10:14:42 np0005604375 competent_booth[245232]:        {
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "devices": [
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "/dev/loop4"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            ],
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_name": "ceph_lv1",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_size": "21470642176",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "name": "ceph_lv1",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "tags": {
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cluster_name": "ceph",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.crush_device_class": "",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.encrypted": "0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.objectstore": "bluestore",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osd_id": "1",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.type": "block",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.vdo": "0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.with_tpm": "0"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            },
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "type": "block",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "vg_name": "ceph_vg1"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:        }
Feb  1 10:14:42 np0005604375 competent_booth[245232]:    ],
Feb  1 10:14:42 np0005604375 competent_booth[245232]:    "2": [
Feb  1 10:14:42 np0005604375 competent_booth[245232]:        {
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "devices": [
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "/dev/loop5"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            ],
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_name": "ceph_lv2",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_size": "21470642176",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "name": "ceph_lv2",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "tags": {
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.cluster_name": "ceph",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.crush_device_class": "",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.encrypted": "0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.objectstore": "bluestore",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osd_id": "2",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.type": "block",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.vdo": "0",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:                "ceph.with_tpm": "0"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            },
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "type": "block",
Feb  1 10:14:42 np0005604375 competent_booth[245232]:            "vg_name": "ceph_vg2"
Feb  1 10:14:42 np0005604375 competent_booth[245232]:        }
Feb  1 10:14:42 np0005604375 competent_booth[245232]:    ]
Feb  1 10:14:42 np0005604375 competent_booth[245232]: }
Feb  1 10:14:42 np0005604375 systemd[1]: libpod-f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326.scope: Deactivated successfully.
Feb  1 10:14:42 np0005604375 podman[245215]: 2026-02-01 15:14:42.072932199 +0000 UTC m=+0.445327522 container died f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:14:42 np0005604375 systemd[1]: var-lib-containers-storage-overlay-eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c-merged.mount: Deactivated successfully.
Feb  1 10:14:42 np0005604375 podman[245215]: 2026-02-01 15:14:42.11752356 +0000 UTC m=+0.489918873 container remove f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Feb  1 10:14:42 np0005604375 systemd[1]: libpod-conmon-f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326.scope: Deactivated successfully.
Feb  1 10:14:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 36 KiB/s wr, 4 op/s
Feb  1 10:14:42 np0005604375 podman[245315]: 2026-02-01 15:14:42.557014708 +0000 UTC m=+0.037108962 container create 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:14:42 np0005604375 systemd[1]: Started libpod-conmon-19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7.scope.
Feb  1 10:14:42 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:14:42 np0005604375 podman[245315]: 2026-02-01 15:14:42.609710686 +0000 UTC m=+0.089804950 container init 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:14:42 np0005604375 podman[245315]: 2026-02-01 15:14:42.615042536 +0000 UTC m=+0.095136780 container start 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb  1 10:14:42 np0005604375 competent_jackson[245329]: 167 167
Feb  1 10:14:42 np0005604375 podman[245315]: 2026-02-01 15:14:42.618978556 +0000 UTC m=+0.099072800 container attach 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:14:42 np0005604375 systemd[1]: libpod-19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7.scope: Deactivated successfully.
Feb  1 10:14:42 np0005604375 podman[245315]: 2026-02-01 15:14:42.619505221 +0000 UTC m=+0.099599455 container died 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 10:14:42 np0005604375 podman[245315]: 2026-02-01 15:14:42.540570947 +0000 UTC m=+0.020665211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:14:42 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f6dd07e8552a2dcf11f08c7d8d7d5a4dace741f909a7f990bd3b58ca05316b3f-merged.mount: Deactivated successfully.
Feb  1 10:14:42 np0005604375 podman[245315]: 2026-02-01 15:14:42.655627474 +0000 UTC m=+0.135721728 container remove 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 10:14:42 np0005604375 systemd[1]: libpod-conmon-19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7.scope: Deactivated successfully.
Feb  1 10:14:42 np0005604375 podman[245354]: 2026-02-01 15:14:42.807082112 +0000 UTC m=+0.056873486 container create 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:14:42 np0005604375 systemd[1]: Started libpod-conmon-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope.
Feb  1 10:14:42 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:14:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:42 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:14:42 np0005604375 podman[245354]: 2026-02-01 15:14:42.783035418 +0000 UTC m=+0.032826912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:14:42 np0005604375 podman[245354]: 2026-02-01 15:14:42.893421064 +0000 UTC m=+0.143212528 container init 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 10:14:42 np0005604375 podman[245354]: 2026-02-01 15:14:42.90610579 +0000 UTC m=+0.155897204 container start 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:14:42 np0005604375 podman[245354]: 2026-02-01 15:14:42.910918505 +0000 UTC m=+0.160709919 container attach 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9", "format": "json"}]: dispatch
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:43 np0005604375 lvm[245448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:14:43 np0005604375 lvm[245450]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:14:43 np0005604375 lvm[245450]: VG ceph_vg1 finished
Feb  1 10:14:43 np0005604375 lvm[245448]: VG ceph_vg0 finished
Feb  1 10:14:43 np0005604375 lvm[245452]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:14:43 np0005604375 lvm[245452]: VG ceph_vg2 finished
Feb  1 10:14:43 np0005604375 agitated_joliot[245371]: {}
Feb  1 10:14:43 np0005604375 systemd[1]: libpod-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope: Deactivated successfully.
Feb  1 10:14:43 np0005604375 systemd[1]: libpod-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope: Consumed 1.053s CPU time.
Feb  1 10:14:43 np0005604375 podman[245354]: 2026-02-01 15:14:43.642099535 +0000 UTC m=+0.891890909 container died 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:14:43 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f-merged.mount: Deactivated successfully.
Feb  1 10:14:43 np0005604375 podman[245354]: 2026-02-01 15:14:43.674586956 +0000 UTC m=+0.924378330 container remove 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  1 10:14:43 np0005604375 systemd[1]: libpod-conmon-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope: Deactivated successfully.
Feb  1 10:14:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:14:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:14:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:14:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/8a0117e1-bdfc-47e2-9388-b50bb03f2da5'.
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/.meta.tmp'
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/.meta.tmp' to config b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/.meta'
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "format": "json"}]: dispatch
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb  1 10:14:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb  1 10:14:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 36 KiB/s wr, 4 op/s
Feb  1 10:14:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:14:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:14:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb  1 10:14:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb  1 10:14:46 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb  1 10:14:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb  1 10:14:46 np0005604375 nova_compute[238794]: 2026-02-01 15:14:46.599 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:46 np0005604375 nova_compute[238794]: 2026-02-01 15:14:46.600 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:46 np0005604375 nova_compute[238794]: 2026-02-01 15:14:46.601 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:14:46 np0005604375 nova_compute[238794]: 2026-02-01 15:14:46.601 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:14:46 np0005604375 nova_compute[238794]: 2026-02-01 15:14:46.618 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:14:47 np0005604375 nova_compute[238794]: 2026-02-01 15:14:47.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:47 np0005604375 nova_compute[238794]: 2026-02-01 15:14:47.340 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/bf7a0fa6-935f-438c-a9e0-4f04fe55824e'.
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/.meta.tmp'
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/.meta.tmp' to config b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/.meta'
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "format": "json"}]: dispatch
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "format": "json"}]: dispatch
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '986a6f65-322a-44eb-81bc-bb6e9d6f221a' of type subvolume
Feb  1 10:14:47 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:47.716+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '986a6f65-322a-44eb-81bc-bb6e9d6f221a' of type subvolume
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a'' moved to trashcan
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc", "format": "json"}]: dispatch
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb  1 10:14:48 np0005604375 nova_compute[238794]: 2026-02-01 15:14:48.318 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:48 np0005604375 nova_compute[238794]: 2026-02-01 15:14:48.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:14:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:14:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:14:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:14:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:14:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.808917) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889808949, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2290, "num_deletes": 257, "total_data_size": 3555922, "memory_usage": 3605792, "flush_reason": "Manual Compaction"}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889819067, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3484761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16344, "largest_seqno": 18633, "table_properties": {"data_size": 3474162, "index_size": 6709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23468, "raw_average_key_size": 21, "raw_value_size": 3452205, "raw_average_value_size": 3101, "num_data_blocks": 298, "num_entries": 1113, "num_filter_entries": 1113, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958707, "oldest_key_time": 1769958707, "file_creation_time": 1769958889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 10183 microseconds, and 4788 cpu microseconds.
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.819105) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3484761 bytes OK
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.819120) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820529) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820544) EVENT_LOG_v1 {"time_micros": 1769958889820540, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820560) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3545923, prev total WAL file size 3545923, number of live WAL files 2.
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.821112) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3403KB)], [38(7673KB)]
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889821227, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11341977, "oldest_snapshot_seqno": -1}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4600 keys, 9556428 bytes, temperature: kUnknown
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889879463, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9556428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9521941, "index_size": 21897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 111613, "raw_average_key_size": 24, "raw_value_size": 9435259, "raw_average_value_size": 2051, "num_data_blocks": 926, "num_entries": 4600, "num_filter_entries": 4600, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.879707) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9556428 bytes
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.883283) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.5 rd, 163.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5130, records dropped: 530 output_compression: NoCompression
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.883321) EVENT_LOG_v1 {"time_micros": 1769958889883310, "job": 18, "event": "compaction_finished", "compaction_time_micros": 58302, "compaction_time_cpu_micros": 27529, "output_level": 6, "num_output_files": 1, "total_output_size": 9556428, "num_input_records": 5130, "num_output_records": 4600, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889883805, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889884842, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:14:49 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:14:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb  1 10:14:50 np0005604375 nova_compute[238794]: 2026-02-01 15:14:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:50 np0005604375 nova_compute[238794]: 2026-02-01 15:14:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:50 np0005604375 nova_compute[238794]: 2026-02-01 15:14:50.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:14:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:14:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642991910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:14:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:14:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642991910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/c7ded93a-afbe-41b1-ad33-9bd7a71748e6'.
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/.meta.tmp'
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/.meta.tmp' to config b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/.meta'
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb  1 10:14:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "format": "json"}]: dispatch
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb  1 10:14:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb  1 10:14:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:51 np0005604375 nova_compute[238794]: 2026-02-01 15:14:51.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 5 op/s
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc_1ed659b6-e30b-4f53-ae01-83823d19486c", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc_1ed659b6-e30b-4f53-ae01-83823d19486c, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc_1ed659b6-e30b-4f53-ae01-83823d19486c, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.350 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:14:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:14:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190860762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.830 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.973 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.974 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5101MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.974 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:14:52 np0005604375 nova_compute[238794]: 2026-02-01 15:14:52.974 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.046 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.046 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.067 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:14:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:14:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210678953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.531 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.535 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.555 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.557 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:14:53 np0005604375 nova_compute[238794]: 2026-02-01 15:14:53.558 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:14:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 5 op/s
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb  1 10:14:55 np0005604375 podman[245535]: 2026-02-01 15:14:55.96887009 +0000 UTC m=+0.060272802 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/343d7908-3f6a-4ee6-ae99-98e6f37f0d79'.
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/.meta.tmp'
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/.meta.tmp' to config b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/.meta'
Feb  1 10:14:55 np0005604375 podman[245536]: 2026-02-01 15:14:55.991906446 +0000 UTC m=+0.083530934 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "format": "json"}]: dispatch
Feb  1 10:14:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:14:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9_7452b405-63e8-464b-8fbd-4019869a8486", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9_7452b405-63e8-464b-8fbd-4019869a8486, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9_7452b405-63e8-464b-8fbd-4019869a8486, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 402 B/s rd, 46 KiB/s wr, 6 op/s
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "format": "json"}]: dispatch
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bc96901b-a655-4999-93d2-e6667ec9f6a9' of type subvolume
Feb  1 10:14:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:56.415+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bc96901b-a655-4999-93d2-e6667ec9f6a9' of type subvolume
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9'' moved to trashcan
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb  1 10:14:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 39 KiB/s wr, 5 op/s
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "format": "json"}]: dispatch
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '02a9afaa-78ab-4c60-9b65-efddd9ffb5df' of type subvolume
Feb  1 10:14:59 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:59.376+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '02a9afaa-78ab-4c60-9b65-efddd9ffb5df' of type subvolume
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "force": true, "format": "json"}]: dispatch
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df'' moved to trashcan
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:14:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb  1 10:15:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb  1 10:15:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb  1 10:15:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 47 KiB/s wr, 6 op/s
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "format": "json"}]: dispatch
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eb102253-6a1f-49e8-ab97-331a8e4964d4' of type subvolume
Feb  1 10:15:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:00.822+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eb102253-6a1f-49e8-ab97-331a8e4964d4' of type subvolume
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4'' moved to trashcan
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb  1 10:15:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/a4ef6e04-7742-4e57-b0b4-6785fe4b593f'.
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/.meta.tmp'
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/.meta.tmp' to config b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/.meta'
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "format": "json"}]: dispatch
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb  1 10:15:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb  1 10:15:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 62 KiB/s wr, 8 op/s
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "format": "json"}]: dispatch
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:03 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:03.104+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0be7a54-7b29-45e1-9605-eb7321d359f2' of type subvolume
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0be7a54-7b29-45e1-9605-eb7321d359f2' of type subvolume
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2'' moved to trashcan
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb  1 10:15:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 62 KiB/s wr, 8 op/s
Feb  1 10:15:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb  1 10:15:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb  1 10:15:06 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb  1 10:15:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 90 KiB/s wr, 11 op/s
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "format": "json"}]: dispatch
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef425026-5828-4f43-8ed3-bad0eb8046b9' of type subvolume
Feb  1 10:15:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:07.506+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef425026-5828-4f43-8ed3-bad0eb8046b9' of type subvolume
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9'' moved to trashcan
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb  1 10:15:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:15:07.810 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:15:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:15:07.811 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:15:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:15:07.811 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:15:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 87 KiB/s wr, 11 op/s
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/d1d2aa30-3fda-423c-98e7-19123ab0f35e'.
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp'
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp' to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta'
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "format": "json"}]: dispatch
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 72 KiB/s wr, 9 op/s
Feb  1 10:15:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 45 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 58 KiB/s wr, 7 op/s
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/a1249ef8-0fd1-4988-8b76-452c96f79331'.
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/.meta.tmp'
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/.meta.tmp' to config b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/.meta'
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "format": "json"}]: dispatch
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb  1 10:15:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb  1 10:15:12 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:12 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b", "format": "json"}]: dispatch
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "format": "json"}]: dispatch
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:14f2cf47-b452-4ed6-a42d-a978bd461803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:14f2cf47-b452-4ed6-a42d-a978bd461803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '14f2cf47-b452-4ed6-a42d-a978bd461803' of type subvolume
Feb  1 10:15:13 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:13.537+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '14f2cf47-b452-4ed6-a42d-a978bd461803' of type subvolume
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803'' moved to trashcan
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb  1 10:15:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 45 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 58 KiB/s wr, 7 op/s
Feb  1 10:15:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 402 B/s rd, 44 KiB/s wr, 5 op/s
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/cd46af17-607e-4852-a03f-51369d24dcbc'.
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/.meta.tmp'
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/.meta.tmp' to config b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/.meta'
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "format": "json"}]: dispatch
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb  1 10:15:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb  1 10:15:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b_9f047d51-0b94-405e-b75c-b64696ffced9", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b_9f047d51-0b94-405e-b75c-b64696ffced9, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp'
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp' to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta'
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b_9f047d51-0b94-405e-b75c-b64696ffced9, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp'
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp' to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta'
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:15:17
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Feb  1 10:15:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:15:18 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:15:18.228 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  1 10:15:18 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:15:18.230 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:15:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "876b1428-1377-472c-b335-dfa9653f4509", "format": "json"}]: dispatch
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:876b1428-1377-472c-b335-dfa9653f4509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:876b1428-1377-472c-b335-dfa9653f4509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '876b1428-1377-472c-b335-dfa9653f4509' of type subvolume
Feb  1 10:15:20 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:20.303+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '876b1428-1377-472c-b335-dfa9653f4509' of type subvolume
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509'' moved to trashcan
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb  1 10:15:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "format": "json"}]: dispatch
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:21 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:21.109+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7371576d-9b9d-4a2b-b2a0-dbf1c35daed8' of type subvolume
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7371576d-9b9d-4a2b-b2a0-dbf1c35daed8' of type subvolume
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8'' moved to trashcan
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb  1 10:15:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:22 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:15:22.232 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  1 10:15:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 8 op/s
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "format": "json"}]: dispatch
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:23 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:23.904+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4' of type subvolume
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4' of type subvolume
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4'' moved to trashcan
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb  1 10:15:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 41 KiB/s wr, 5 op/s
Feb  1 10:15:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb  1 10:15:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb  1 10:15:25 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb  1 10:15:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb  1 10:15:26 np0005604375 podman[245580]: 2026-02-01 15:15:26.980929145 +0000 UTC m=+0.065593871 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 10:15:27 np0005604375 podman[245581]: 2026-02-01 15:15:27.026396971 +0000 UTC m=+0.106602712 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller)
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/3d7a4d56-3b54-44b4-b837-eb1b19c5bef9'.
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/.meta.tmp'
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/.meta.tmp' to config b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/.meta'
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "format": "json"}]: dispatch
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb  1 10:15:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb  1 10:15:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659521351430677 of space, bias 1.0, pg target 0.1997856405429203 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.898622621989711e-05 of space, bias 4.0, pg target 0.09478347146387653 quantized to 16 (current 16)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:15:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb  1 10:15:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb  1 10:15:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "format": "json"}]: dispatch
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:31 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:31.779+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f230efa5-4f47-4fa4-820a-fbfacc27744c' of type subvolume
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f230efa5-4f47-4fa4-820a-fbfacc27744c' of type subvolume
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c'' moved to trashcan
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb  1 10:15:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Feb  1 10:15:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23'.
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/.meta.tmp'
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/.meta.tmp' to config b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/.meta'
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "format": "json"}]: dispatch
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:15:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:15:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb  1 10:15:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb  1 10:15:36 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb  1 10:15:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb  1 10:15:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a'.
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/.meta.tmp'
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/.meta.tmp' to config b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/.meta'
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "format": "json"}]: dispatch
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:15:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:15:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb  1 10:15:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Feb  1 10:15:42 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:15:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb  1 10:15:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Feb  1 10:15:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb  1 10:15:42 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID Joe with tenant e483891a9fd042d4a571a3d4655dc685
Feb  1 10:15:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:15:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:15:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:15:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb'.
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/.meta.tmp'
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/.meta.tmp' to config b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/.meta'
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "format": "json"}]: dispatch
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:15:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:15:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb  1 10:15:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:15:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:15:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:15:44 np0005604375 podman[245768]: 2026-02-01 15:15:44.673664687 +0000 UTC m=+0.046987073 container create 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb  1 10:15:44 np0005604375 systemd[1]: Started libpod-conmon-59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3.scope.
Feb  1 10:15:44 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:15:44 np0005604375 podman[245768]: 2026-02-01 15:15:44.719707952 +0000 UTC m=+0.093030358 container init 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:15:44 np0005604375 podman[245768]: 2026-02-01 15:15:44.724983251 +0000 UTC m=+0.098305637 container start 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:15:44 np0005604375 frosty_blackburn[245783]: 167 167
Feb  1 10:15:44 np0005604375 systemd[1]: libpod-59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3.scope: Deactivated successfully.
Feb  1 10:15:44 np0005604375 podman[245768]: 2026-02-01 15:15:44.72816744 +0000 UTC m=+0.101489816 container attach 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 10:15:44 np0005604375 podman[245768]: 2026-02-01 15:15:44.729015504 +0000 UTC m=+0.102337890 container died 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:15:44 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a0ca148d47fbfda43051201434aed814ae73902d0732fc14c9c56803f3bfe286-merged.mount: Deactivated successfully.
Feb  1 10:15:44 np0005604375 podman[245768]: 2026-02-01 15:15:44.655787834 +0000 UTC m=+0.029110240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:15:44 np0005604375 podman[245768]: 2026-02-01 15:15:44.763126424 +0000 UTC m=+0.136448810 container remove 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:15:44 np0005604375 systemd[1]: libpod-conmon-59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3.scope: Deactivated successfully.
Feb  1 10:15:44 np0005604375 podman[245806]: 2026-02-01 15:15:44.88353298 +0000 UTC m=+0.039569944 container create 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:15:44 np0005604375 systemd[1]: Started libpod-conmon-774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3.scope.
Feb  1 10:15:44 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:15:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:44 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:44 np0005604375 podman[245806]: 2026-02-01 15:15:44.944398782 +0000 UTC m=+0.100435796 container init 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  1 10:15:44 np0005604375 podman[245806]: 2026-02-01 15:15:44.949011722 +0000 UTC m=+0.105048676 container start 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 10:15:44 np0005604375 podman[245806]: 2026-02-01 15:15:44.953878439 +0000 UTC m=+0.109915503 container attach 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 10:15:44 np0005604375 podman[245806]: 2026-02-01 15:15:44.866264914 +0000 UTC m=+0.022301938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:15:45 np0005604375 adoring_rosalind[245822]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:15:45 np0005604375 adoring_rosalind[245822]: --> All data devices are unavailable
Feb  1 10:15:45 np0005604375 systemd[1]: libpod-774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3.scope: Deactivated successfully.
Feb  1 10:15:45 np0005604375 podman[245806]: 2026-02-01 15:15:45.316721177 +0000 UTC m=+0.472758141 container died 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:15:45 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0-merged.mount: Deactivated successfully.
Feb  1 10:15:45 np0005604375 podman[245806]: 2026-02-01 15:15:45.359314105 +0000 UTC m=+0.515351069 container remove 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  1 10:15:45 np0005604375 systemd[1]: libpod-conmon-774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3.scope: Deactivated successfully.
Feb  1 10:15:45 np0005604375 podman[245914]: 2026-02-01 15:15:45.749290556 +0000 UTC m=+0.037114165 container create c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:15:45 np0005604375 systemd[1]: Started libpod-conmon-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope.
Feb  1 10:15:45 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:15:45 np0005604375 podman[245914]: 2026-02-01 15:15:45.815692874 +0000 UTC m=+0.103516493 container init c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:15:45 np0005604375 podman[245914]: 2026-02-01 15:15:45.821129597 +0000 UTC m=+0.108953256 container start c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:15:45 np0005604375 systemd[1]: libpod-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope: Deactivated successfully.
Feb  1 10:15:45 np0005604375 happy_cartwright[245930]: 167 167
Feb  1 10:15:45 np0005604375 conmon[245930]: conmon c1cde0c14b34b768c508 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope/container/memory.events
Feb  1 10:15:45 np0005604375 podman[245914]: 2026-02-01 15:15:45.734470829 +0000 UTC m=+0.022294478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:15:45 np0005604375 podman[245914]: 2026-02-01 15:15:45.872485301 +0000 UTC m=+0.160308930 container attach c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 10:15:45 np0005604375 podman[245914]: 2026-02-01 15:15:45.873049917 +0000 UTC m=+0.160873556 container died c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 10:15:46 np0005604375 systemd[1]: var-lib-containers-storage-overlay-db7ef22c5e7056c085f3644c55d68a49b8fe91b47cb51abaf94a6a6328ea6651-merged.mount: Deactivated successfully.
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:46 np0005604375 podman[245914]: 2026-02-01 15:15:46.139012739 +0000 UTC m=+0.426836348 container remove c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:15:46 np0005604375 systemd[1]: libpod-conmon-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope: Deactivated successfully.
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f'.
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/.meta.tmp'
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/.meta.tmp' to config b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/.meta'
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.164414) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946164451, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 894, "num_deletes": 258, "total_data_size": 969731, "memory_usage": 986024, "flush_reason": "Manual Compaction"}
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946170493, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 958900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18634, "largest_seqno": 19527, "table_properties": {"data_size": 954466, "index_size": 1958, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10584, "raw_average_key_size": 19, "raw_value_size": 945042, "raw_average_value_size": 1730, "num_data_blocks": 88, "num_entries": 546, "num_filter_entries": 546, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958889, "oldest_key_time": 1769958889, "file_creation_time": 1769958946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 6108 microseconds, and 2771 cpu microseconds.
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.170525) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 958900 bytes OK
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.170537) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172402) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172414) EVENT_LOG_v1 {"time_micros": 1769958946172410, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172427) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 965183, prev total WAL file size 965183, number of live WAL files 2.
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(936KB)], [41(9332KB)]
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946172748, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10515328, "oldest_snapshot_seqno": -1}
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "format": "json"}]: dispatch
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4614 keys, 10396351 bytes, temperature: kUnknown
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946234216, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 10396351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10360294, "index_size": 23403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 113351, "raw_average_key_size": 24, "raw_value_size": 10271967, "raw_average_value_size": 2226, "num_data_blocks": 986, "num_entries": 4614, "num_filter_entries": 4614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.234476) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10396351 bytes
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.235979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.7 rd, 168.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.1 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(21.8) write-amplify(10.8) OK, records in: 5146, records dropped: 532 output_compression: NoCompression
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.236000) EVENT_LOG_v1 {"time_micros": 1769958946235990, "job": 20, "event": "compaction_finished", "compaction_time_micros": 61587, "compaction_time_cpu_micros": 14606, "output_level": 6, "num_output_files": 1, "total_output_size": 10396351, "num_input_records": 5146, "num_output_records": 4614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946236187, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946237055, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:46 np0005604375 podman[245957]: 2026-02-01 15:15:46.258217373 +0000 UTC m=+0.029747498 container create 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  1 10:15:46 np0005604375 systemd[1]: Started libpod-conmon-3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531.scope.
Feb  1 10:15:46 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:15:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:46 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:46 np0005604375 podman[245957]: 2026-02-01 15:15:46.330455435 +0000 UTC m=+0.101985570 container init 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:15:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s wr, 4 op/s
Feb  1 10:15:46 np0005604375 podman[245957]: 2026-02-01 15:15:46.340316122 +0000 UTC m=+0.111846257 container start 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:15:46 np0005604375 podman[245957]: 2026-02-01 15:15:46.343456101 +0000 UTC m=+0.114986256 container attach 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:15:46 np0005604375 podman[245957]: 2026-02-01 15:15:46.246131403 +0000 UTC m=+0.017661558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]: {
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:    "0": [
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:        {
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "devices": [
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "/dev/loop3"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            ],
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_name": "ceph_lv0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_size": "21470642176",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "name": "ceph_lv0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "tags": {
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cluster_name": "ceph",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.crush_device_class": "",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.encrypted": "0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.objectstore": "bluestore",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osd_id": "0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.type": "block",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.vdo": "0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.with_tpm": "0"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            },
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "type": "block",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "vg_name": "ceph_vg0"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:        }
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:    ],
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:    "1": [
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:        {
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "devices": [
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "/dev/loop4"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            ],
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_name": "ceph_lv1",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_size": "21470642176",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "name": "ceph_lv1",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "tags": {
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cluster_name": "ceph",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.crush_device_class": "",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.encrypted": "0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.objectstore": "bluestore",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osd_id": "1",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.type": "block",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.vdo": "0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.with_tpm": "0"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            },
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "type": "block",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "vg_name": "ceph_vg1"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:        }
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:    ],
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:    "2": [
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:        {
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "devices": [
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "/dev/loop5"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            ],
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_name": "ceph_lv2",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_size": "21470642176",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "name": "ceph_lv2",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "tags": {
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.cluster_name": "ceph",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.crush_device_class": "",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.encrypted": "0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.objectstore": "bluestore",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osd_id": "2",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.type": "block",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.vdo": "0",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:                "ceph.with_tpm": "0"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            },
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "type": "block",
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:            "vg_name": "ceph_vg2"
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:        }
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]:    ]
Feb  1 10:15:46 np0005604375 ecstatic_jackson[245974]: }
Feb  1 10:15:46 np0005604375 systemd[1]: libpod-3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531.scope: Deactivated successfully.
Feb  1 10:15:46 np0005604375 podman[245957]: 2026-02-01 15:15:46.624832916 +0000 UTC m=+0.396363041 container died 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:15:46 np0005604375 systemd[1]: var-lib-containers-storage-overlay-85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1-merged.mount: Deactivated successfully.
Feb  1 10:15:46 np0005604375 podman[245957]: 2026-02-01 15:15:46.659966235 +0000 UTC m=+0.431496370 container remove 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:15:46 np0005604375 systemd[1]: libpod-conmon-3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531.scope: Deactivated successfully.
Feb  1 10:15:46 np0005604375 podman[246060]: 2026-02-01 15:15:46.990636267 +0000 UTC m=+0.035748897 container create 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:15:47 np0005604375 systemd[1]: Started libpod-conmon-935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92.scope.
Feb  1 10:15:47 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:15:47 np0005604375 podman[246060]: 2026-02-01 15:15:47.053004082 +0000 UTC m=+0.098116752 container init 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 10:15:47 np0005604375 podman[246060]: 2026-02-01 15:15:47.057689113 +0000 UTC m=+0.102801753 container start 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:15:47 np0005604375 sharp_jang[246077]: 167 167
Feb  1 10:15:47 np0005604375 systemd[1]: libpod-935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92.scope: Deactivated successfully.
Feb  1 10:15:47 np0005604375 podman[246060]: 2026-02-01 15:15:47.061351776 +0000 UTC m=+0.106464446 container attach 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:15:47 np0005604375 podman[246060]: 2026-02-01 15:15:47.061577383 +0000 UTC m=+0.106690013 container died 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:15:47 np0005604375 podman[246060]: 2026-02-01 15:15:46.97615595 +0000 UTC m=+0.021268600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:15:47 np0005604375 systemd[1]: var-lib-containers-storage-overlay-8d063ab31ca2cf49dc757f7992d4492b5869eba17454dcc8ea1eafa35bbffb8a-merged.mount: Deactivated successfully.
Feb  1 10:15:47 np0005604375 podman[246060]: 2026-02-01 15:15:47.100247001 +0000 UTC m=+0.145359631 container remove 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:15:47 np0005604375 systemd[1]: libpod-conmon-935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92.scope: Deactivated successfully.
Feb  1 10:15:47 np0005604375 podman[246102]: 2026-02-01 15:15:47.234215889 +0000 UTC m=+0.045524501 container create 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 10:15:47 np0005604375 systemd[1]: Started libpod-conmon-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope.
Feb  1 10:15:47 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:15:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:47 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:15:47 np0005604375 podman[246102]: 2026-02-01 15:15:47.313936492 +0000 UTC m=+0.125245114 container init 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 10:15:47 np0005604375 podman[246102]: 2026-02-01 15:15:47.218751134 +0000 UTC m=+0.030059776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:15:47 np0005604375 podman[246102]: 2026-02-01 15:15:47.324095568 +0000 UTC m=+0.135404210 container start 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  1 10:15:47 np0005604375 podman[246102]: 2026-02-01 15:15:47.327927696 +0000 UTC m=+0.139236298 container attach 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea'.
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/.meta.tmp'
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/.meta.tmp' to config b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/.meta'
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "format": "json"}]: dispatch
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:47 np0005604375 nova_compute[238794]: 2026-02-01 15:15:47.558 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:47 np0005604375 nova_compute[238794]: 2026-02-01 15:15:47.559 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:47 np0005604375 nova_compute[238794]: 2026-02-01 15:15:47.559 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:15:47 np0005604375 nova_compute[238794]: 2026-02-01 15:15:47.559 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:15:47 np0005604375 nova_compute[238794]: 2026-02-01 15:15:47.575 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:15:47 np0005604375 nova_compute[238794]: 2026-02-01 15:15:47.575 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:47 np0005604375 lvm[246195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:15:47 np0005604375 lvm[246195]: VG ceph_vg0 finished
Feb  1 10:15:47 np0005604375 lvm[246198]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:15:47 np0005604375 lvm[246198]: VG ceph_vg1 finished
Feb  1 10:15:47 np0005604375 lvm[246200]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:15:47 np0005604375 lvm[246200]: VG ceph_vg2 finished
Feb  1 10:15:48 np0005604375 fervent_agnesi[246119]: {}
Feb  1 10:15:48 np0005604375 systemd[1]: libpod-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope: Deactivated successfully.
Feb  1 10:15:48 np0005604375 systemd[1]: libpod-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope: Consumed 1.057s CPU time.
Feb  1 10:15:48 np0005604375 podman[246102]: 2026-02-01 15:15:48.045932695 +0000 UTC m=+0.857241317 container died 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  1 10:15:48 np0005604375 systemd[1]: var-lib-containers-storage-overlay-91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955-merged.mount: Deactivated successfully.
Feb  1 10:15:48 np0005604375 podman[246102]: 2026-02-01 15:15:48.082960836 +0000 UTC m=+0.894269438 container remove 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:15:48 np0005604375 systemd[1]: libpod-conmon-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope: Deactivated successfully.
Feb  1 10:15:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:15:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:15:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:48 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 3 op/s
Feb  1 10:15:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:15:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:15:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:15:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:15:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:15:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/121881e8-6836-4fd0-8d00-03d9039e7468'.
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "format": "json"}]: dispatch
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:15:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:49 np0005604375 nova_compute[238794]: 2026-02-01 15:15:49.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:49 np0005604375 nova_compute[238794]: 2026-02-01 15:15:49.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb  1 10:15:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Feb  1 10:15:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb  1 10:15:49 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:49.666+0000 7f8267782640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Feb  1 10:15:49 np0005604375 ceph-mgr[75469]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Feb  1 10:15:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb  1 10:15:50 np0005604375 nova_compute[238794]: 2026-02-01 15:15:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 3 op/s
Feb  1 10:15:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:15:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222637731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:15:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:15:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222637731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:15:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:15:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:15:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:15:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:15:51 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:15:51 np0005604375 nova_compute[238794]: 2026-02-01 15:15:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:51 np0005604375 nova_compute[238794]: 2026-02-01 15:15:51.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:15:52 np0005604375 nova_compute[238794]: 2026-02-01 15:15:52.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s wr, 6 op/s
Feb  1 10:15:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "format": "json"}]: dispatch
Feb  1 10:15:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:15:52 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 10:15:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:15:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:15:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} v 0)
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb  1 10:15:53 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-403687319 with tenant 2731ddbed05046f3bee55c8f307163b2
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:15:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb  1 10:15:53 np0005604375 nova_compute[238794]: 2026-02-01 15:15:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:15:53 np0005604375 nova_compute[238794]: 2026-02-01 15:15:53.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:15:53 np0005604375 nova_compute[238794]: 2026-02-01 15:15:53.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:15:53 np0005604375 nova_compute[238794]: 2026-02-01 15:15:53.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:15:53 np0005604375 nova_compute[238794]: 2026-02-01 15:15:53.345 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:15:53 np0005604375 nova_compute[238794]: 2026-02-01 15:15:53.345 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:15:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111579567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:15:53 np0005604375 nova_compute[238794]: 2026-02-01 15:15:53.882 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.041 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.042 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.042 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.042 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.098 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.098 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.120 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 5 op/s
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:15:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:15:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb  1 10:15:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:15:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea
Feb  1 10:15:54 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea],prefix=session evict} (starting...)
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:15:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113531424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.663 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.670 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.688 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.692 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:15:54 np0005604375 nova_compute[238794]: 2026-02-01 15:15:54.692 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "format": "json"}]: dispatch
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcfe09f7-b95d-44d4-88ff-9ddff7f38222' of type subvolume
Feb  1 10:15:54 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:54.710+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcfe09f7-b95d-44d4-88ff-9ddff7f38222' of type subvolume
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "force": true, "format": "json"}]: dispatch
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222'' moved to trashcan
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb  1 10:15:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:15:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:15:55 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "target_sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, target_sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/e8f29e95-6292-426b-b4e0-b055082f1eee'.
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 8a526b98-dcfb-4533-ae00-f05a7d3a9b2d for path b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2'
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, target_sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:55 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:55.995+0000 7f826bf8b640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:55.995+0000 7f826bf8b640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:55.995+0000 7f826bf8b640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:55.995+0000 7f826bf8b640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:55.995+0000 7f826bf8b640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 0c14589f-b0af-4342-affb-d81a226bb4b2)
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.018+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.018+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.018+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.018+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.018+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 0c14589f-b0af-4342-affb-d81a226bb4b2) -- by 0 seconds
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.170090) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956170118, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 414, "num_deletes": 250, "total_data_size": 250156, "memory_usage": 258952, "flush_reason": "Manual Compaction"}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956173741, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 247704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19528, "largest_seqno": 19941, "table_properties": {"data_size": 245121, "index_size": 619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6823, "raw_average_key_size": 20, "raw_value_size": 239890, "raw_average_value_size": 709, "num_data_blocks": 26, "num_entries": 338, "num_filter_entries": 338, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958947, "oldest_key_time": 1769958947, "file_creation_time": 1769958956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 3722 microseconds, and 1469 cpu microseconds.
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.173806) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 247704 bytes OK
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.173830) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175517) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175547) EVENT_LOG_v1 {"time_micros": 1769958956175539, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175570) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 247464, prev total WAL file size 247464, number of live WAL files 2.
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.176002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(241KB)], [44(10152KB)]
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956176052, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 10644055, "oldest_snapshot_seqno": -1}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4439 keys, 7303414 bytes, temperature: kUnknown
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956217028, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7303414, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7272989, "index_size": 18219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110283, "raw_average_key_size": 24, "raw_value_size": 7192137, "raw_average_value_size": 1620, "num_data_blocks": 760, "num_entries": 4439, "num_filter_entries": 4439, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.217283) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7303414 bytes
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.218482) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.3 rd, 177.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(72.5) write-amplify(29.5) OK, records in: 4952, records dropped: 513 output_compression: NoCompression
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.218512) EVENT_LOG_v1 {"time_micros": 1769958956218499, "job": 22, "event": "compaction_finished", "compaction_time_micros": 41055, "compaction_time_cpu_micros": 23472, "output_level": 6, "num_output_files": 1, "total_output_size": 7303414, "num_input_records": 4952, "num_output_records": 4439, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956218678, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956220078, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:56 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s wr, 9 op/s
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "format": "json"}]: dispatch
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.994+0000 7f8248c77640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.994+0000 7f8248c77640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.994+0000 7f8248c77640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.994+0000 7f8248c77640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.994+0000 7f8248c77640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:56 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '66ba7d88-ae35-42fd-932a-84cc5334b587'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "format": "json"}]: dispatch
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.snap/09169dc3-0948-42ec-b7eb-9bb0391d7a50/121881e8-6836-4fd0-8d00-03d9039e7468' to b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/e8f29e95-6292-426b-b4e0-b055082f1eee'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f
Feb  1 10:15:57 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f],prefix=session evict} (starting...)
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] untracking 8a526b98-dcfb-4533-ae00-f05a7d3a9b2d
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 0c14589f-b0af-4342-affb-d81a226bb4b2)
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb  1 10:15:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:57 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.viosrg(active, since 25m)
Feb  1 10:15:57 np0005604375 podman[246324]: 2026-02-01 15:15:57.970043906 +0000 UTC m=+0.054083223 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:15:58 np0005604375 podman[246325]: 2026-02-01 15:15:58.005950706 +0000 UTC m=+0.088924763 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f82797d15e0>
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 7 op/s
Feb  1 10:15:58 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 18 completed events
Feb  1 10:15:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 10:15:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:59 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.viosrg(active, since 25m)
Feb  1 10:15:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb  1 10:15:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed'.
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/.meta.tmp'
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/.meta.tmp' to config b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/.meta'
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "format": "json"}]: dispatch
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:15:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:15:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:15:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "format": "json"}]: dispatch
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} v 0)
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"} v 0)
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"} : dispatch
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"}]': finished
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "format": "json"}]: dispatch
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-403687319, client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f
Feb  1 10:16:00 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-403687319,client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f],prefix=session evict} (starting...)
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:16:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 7 op/s
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"} : dispatch
Feb  1 10:16:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"}]': finished
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:01 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 114 KiB/s wr, 15 op/s
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "format": "json"}]: dispatch
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:16:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Feb  1 10:16:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb  1 10:16:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Feb  1 10:16:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Feb  1 10:16:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "format": "json"}]: dispatch
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a
Feb  1 10:16:03 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a],prefix=session evict} (starting...)
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 92 KiB/s wr, 12 op/s
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed
Feb  1 10:16:04 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed],prefix=session evict} (starting...)
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "format": "json"}]: dispatch
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:67e50812-4602-4dc4-b942-a78b28ddb769, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:67e50812-4602-4dc4-b942-a78b28ddb769, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:04.886+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67e50812-4602-4dc4-b942-a78b28ddb769' of type subvolume
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67e50812-4602-4dc4-b942-a78b28ddb769' of type subvolume
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769'' moved to trashcan
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb  1 10:16:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2'' moved to trashcan
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb  1 10:16:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 142 KiB/s wr, 18 op/s
Feb  1 10:16:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "admin", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb  1 10:16:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Feb  1 10:16:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Feb  1 10:16:07 np0005604375 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Feb  1 10:16:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb  1 10:16:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:07.234+0000 7f8267782640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Feb  1 10:16:07 np0005604375 ceph-mgr[75469]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Feb  1 10:16:07 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Feb  1 10:16:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:16:07.811 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:16:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:16:07.812 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:16:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:16:07.812 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed'.
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/.meta.tmp'
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/.meta.tmp' to config b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/.meta'
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "format": "json"}]: dispatch
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:16:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:16:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 99 KiB/s wr, 13 op/s
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50_9edff701-b45a-4597-ae78-08c7150fd6a2", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50_9edff701-b45a-4597-ae78-08c7150fd6a2, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50_9edff701-b45a-4597-ae78-08c7150fd6a2, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb  1 10:16:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:16:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 99 KiB/s wr, 13 op/s
Feb  1 10:16:10 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb  1 10:16:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Feb  1 10:16:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb  1 10:16:10 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID david with tenant e483891a9fd042d4a571a3d4655dc685
Feb  1 10:16:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:11 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 122 KiB/s wr, 18 op/s
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "format": "json"}]: dispatch
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:12 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:12.565+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8330130-cd80-47bb-ab6d-4bb6b88724d1' of type subvolume
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8330130-cd80-47bb-ab6d-4bb6b88724d1' of type subvolume
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1'' moved to trashcan
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 74 KiB/s wr, 10 op/s
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/52e2d3d9-e8df-4982-b844-eab1575eaea8'.
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/.meta.tmp'
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/.meta.tmp' to config b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/.meta'
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "format": "json"}]: dispatch
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:16:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:16:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb  1 10:16:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb  1 10:16:14 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed
Feb  1 10:16:15 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed],prefix=session evict} (starting...)
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "format": "json"}]: dispatch
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:15.776+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90ad7db4-01ea-4e02-bd1a-db4113b80713' of type subvolume
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90ad7db4-01ea-4e02-bd1a-db4113b80713' of type subvolume
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713'' moved to trashcan
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb  1 10:16:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:16:17
Feb  1 10:16:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:16:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:16:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control', 'backups']
Feb  1 10:16:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb  1 10:16:18 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:18.197+0000 7f8267782640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:18 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:16:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:16:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb  1 10:16:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb  1 10:16:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb  1 10:16:21 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb  1 10:16:21 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:16:21.283 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:16:21 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:16:21.285 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "format": "json"}]: dispatch
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '53873f8b-858c-4fab-a187-a58acce7cad2'
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "format": "json"}]: dispatch
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/52e2d3d9-e8df-4982-b844-eab1575eaea8
Feb  1 10:16:21 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/52e2d3d9-e8df-4982-b844-eab1575eaea8],prefix=session evict} (starting...)
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 112 KiB/s wr, 14 op/s
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb  1 10:16:22 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:23 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 648 B/s rd, 95 KiB/s wr, 12 op/s
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "format": "json"}]: dispatch
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "format": "json"}]: dispatch
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23
Feb  1 10:16:25 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23],prefix=session evict} (starting...)
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Feb  1 10:16:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Feb  1 10:16:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb  1 10:16:27 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:27 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659612794123319 of space, bias 1.0, pg target 0.19978838382369957 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00014578567209782184 of space, bias 4.0, pg target 0.17494280651738622 quantized to 16 (current 16)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.994977860259165e-07 of space, bias 1.0, pg target 0.00020984933580777494 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:16:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb  1 10:16:29 np0005604375 podman[246379]: 2026-02-01 15:16:29.001051181 +0000 UTC m=+0.073497659 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb  1 10:16:29 np0005604375 podman[246380]: 2026-02-01 15:16:29.029923693 +0000 UTC m=+0.099052728 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, container_name=ovn_controller)
Feb  1 10:16:29 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:16:29.287 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "format": "json"}]: dispatch
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:53873f8b-858c-4fab-a187-a58acce7cad2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:53873f8b-858c-4fab-a187-a58acce7cad2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:29.783+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53873f8b-858c-4fab-a187-a58acce7cad2' of type subvolume
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53873f8b-858c-4fab-a187-a58acce7cad2' of type subvolume
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2'' moved to trashcan
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb  1 10:16:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb  1 10:16:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:30 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:16:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb  1 10:16:30 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/e66436b8-aa27-44ad-a68b-5fc46f0da8d3'.
Feb  1 10:16:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/.meta.tmp'
Feb  1 10:16:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/.meta.tmp' to config b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/.meta'
Feb  1 10:16:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb  1 10:16:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "format": "json"}]: dispatch
Feb  1 10:16:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb  1 10:16:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb  1 10:16:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:16:31 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:16:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 90 KiB/s wr, 10 op/s
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "format": "json"}]: dispatch
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:66ba7d88-ae35-42fd-932a-84cc5334b587, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:66ba7d88-ae35-42fd-932a-84cc5334b587, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:33 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:33.345+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66ba7d88-ae35-42fd-932a-84cc5334b587' of type subvolume
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66ba7d88-ae35-42fd-932a-84cc5334b587' of type subvolume
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587'' moved to trashcan
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 84 KiB/s wr, 10 op/s
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:34 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb  1 10:16:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:34 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb  1 10:16:34 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "format": "json"}]: dispatch
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:35 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:35.570+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0bd1c69e-9d87-420b-8cc7-eab8d429d2d0' of type subvolume
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0bd1c69e-9d87-420b-8cc7-eab8d429d2d0' of type subvolume
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0'' moved to trashcan
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb  1 10:16:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 120 KiB/s wr, 14 op/s
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "format": "json"}]: dispatch
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:36 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:36.911+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc8298b6-cd36-4e3a-b5fa-1906378c83d8' of type subvolume
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc8298b6-cd36-4e3a-b5fa-1906378c83d8' of type subvolume
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8'' moved to trashcan
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb  1 10:16:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:37 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb  1 10:16:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb  1 10:16:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:38 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 9 op/s
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 9 op/s
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "admin", "format": "json"}]: dispatch
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:40 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:40.534+0000 7f8267782640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f13e6643-de3c-4836-add7-2244ceca3720", "format": "json"}]: dispatch
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f13e6643-de3c-4836-add7-2244ceca3720, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f13e6643-de3c-4836-add7-2244ceca3720, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:40 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:40.623+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f13e6643-de3c-4836-add7-2244ceca3720' of type subvolume
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f13e6643-de3c-4836-add7-2244ceca3720' of type subvolume
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720'' moved to trashcan
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb  1 10:16:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb  1 10:16:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb  1 10:16:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb  1 10:16:41 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb  1 10:16:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb  1 10:16:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb  1 10:16:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 111 KiB/s wr, 14 op/s
Feb  1 10:16:43 np0005604375 nova_compute[238794]: 2026-02-01 15:16:43.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 73 KiB/s wr, 9 op/s
Feb  1 10:16:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:46 np0005604375 nova_compute[238794]: 2026-02-01 15:16:46.338 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:46 np0005604375 nova_compute[238794]: 2026-02-01 15:16:46.339 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:16:46 np0005604375 nova_compute[238794]: 2026-02-01 15:16:46.339 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 88 KiB/s wr, 12 op/s
Feb  1 10:16:46 np0005604375 nova_compute[238794]: 2026-02-01 15:16:46.361 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "format": "json"}]: dispatch
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:16:46 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:46.601+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7' of type subvolume
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7' of type subvolume
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "force": true, "format": "json"}]: dispatch
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7'' moved to trashcan
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:16:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb  1 10:16:47 np0005604375 nova_compute[238794]: 2026-02-01 15:16:47.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:47 np0005604375 nova_compute[238794]: 2026-02-01 15:16:47.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:47 np0005604375 nova_compute[238794]: 2026-02-01 15:16:47.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 7 op/s
Feb  1 10:16:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:16:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:16:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:16:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:16:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:16:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:16:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:16:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:16:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:16:49 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:16:49 np0005604375 podman[246569]: 2026-02-01 15:16:49.283999584 +0000 UTC m=+0.035353855 container create 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 10:16:49 np0005604375 systemd[1]: Started libpod-conmon-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope.
Feb  1 10:16:49 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:16:49 np0005604375 podman[246569]: 2026-02-01 15:16:49.357859332 +0000 UTC m=+0.109213623 container init 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:16:49 np0005604375 podman[246569]: 2026-02-01 15:16:49.266827051 +0000 UTC m=+0.018181322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:16:49 np0005604375 podman[246569]: 2026-02-01 15:16:49.364651403 +0000 UTC m=+0.116005664 container start 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 10:16:49 np0005604375 gallant_dijkstra[246586]: 167 167
Feb  1 10:16:49 np0005604375 systemd[1]: libpod-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope: Deactivated successfully.
Feb  1 10:16:49 np0005604375 podman[246569]: 2026-02-01 15:16:49.368374648 +0000 UTC m=+0.119728909 container attach 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 10:16:49 np0005604375 conmon[246586]: conmon 55c0a62456c1b3124634 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope/container/memory.events
Feb  1 10:16:49 np0005604375 podman[246569]: 2026-02-01 15:16:49.37093546 +0000 UTC m=+0.122289711 container died 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:16:49 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9a7134e4228ea5bd78b6f33f3d4b8e3a3859840f31dcd6770ceda2acdf0541f3-merged.mount: Deactivated successfully.
Feb  1 10:16:49 np0005604375 podman[246569]: 2026-02-01 15:16:49.417534761 +0000 UTC m=+0.168889042 container remove 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:16:49 np0005604375 systemd[1]: libpod-conmon-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope: Deactivated successfully.
Feb  1 10:16:49 np0005604375 podman[246613]: 2026-02-01 15:16:49.575127584 +0000 UTC m=+0.055320987 container create a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 10:16:49 np0005604375 systemd[1]: Started libpod-conmon-a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955.scope.
Feb  1 10:16:49 np0005604375 podman[246613]: 2026-02-01 15:16:49.54724167 +0000 UTC m=+0.027435163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:16:49 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:16:49 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:49 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:49 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:49 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:49 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:49 np0005604375 podman[246613]: 2026-02-01 15:16:49.677193955 +0000 UTC m=+0.157387398 container init a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:16:49 np0005604375 podman[246613]: 2026-02-01 15:16:49.687248208 +0000 UTC m=+0.167441641 container start a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  1 10:16:49 np0005604375 podman[246613]: 2026-02-01 15:16:49.691316763 +0000 UTC m=+0.171510316 container attach a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  1 10:16:50 np0005604375 great_pike[246629]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:16:50 np0005604375 great_pike[246629]: --> All data devices are unavailable
Feb  1 10:16:50 np0005604375 systemd[1]: libpod-a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955.scope: Deactivated successfully.
Feb  1 10:16:50 np0005604375 podman[246613]: 2026-02-01 15:16:50.149898164 +0000 UTC m=+0.630091597 container died a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 10:16:50 np0005604375 systemd[1]: var-lib-containers-storage-overlay-7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079-merged.mount: Deactivated successfully.
Feb  1 10:16:50 np0005604375 podman[246613]: 2026-02-01 15:16:50.199216021 +0000 UTC m=+0.679409414 container remove a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:16:50 np0005604375 systemd[1]: libpod-conmon-a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955.scope: Deactivated successfully.
Feb  1 10:16:50 np0005604375 nova_compute[238794]: 2026-02-01 15:16:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 7 op/s
Feb  1 10:16:50 np0005604375 podman[246720]: 2026-02-01 15:16:50.665249831 +0000 UTC m=+0.063023093 container create a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb  1 10:16:50 np0005604375 systemd[1]: Started libpod-conmon-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope.
Feb  1 10:16:50 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:16:50 np0005604375 podman[246720]: 2026-02-01 15:16:50.638016225 +0000 UTC m=+0.035789557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:16:50 np0005604375 podman[246720]: 2026-02-01 15:16:50.730538858 +0000 UTC m=+0.128312110 container init a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 10:16:50 np0005604375 podman[246720]: 2026-02-01 15:16:50.736032313 +0000 UTC m=+0.133805565 container start a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 10:16:50 np0005604375 unruffled_dhawan[246736]: 167 167
Feb  1 10:16:50 np0005604375 systemd[1]: libpod-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope: Deactivated successfully.
Feb  1 10:16:50 np0005604375 conmon[246736]: conmon a6258acb2e66db194a83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope/container/memory.events
Feb  1 10:16:50 np0005604375 podman[246720]: 2026-02-01 15:16:50.739776208 +0000 UTC m=+0.137549540 container attach a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:16:50 np0005604375 podman[246720]: 2026-02-01 15:16:50.740220111 +0000 UTC m=+0.137993383 container died a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  1 10:16:50 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a0370514c6a4f766ca1cd4a4c80148cbcc98e0e259211652655f56fdcfa6292e-merged.mount: Deactivated successfully.
Feb  1 10:16:50 np0005604375 podman[246720]: 2026-02-01 15:16:50.780447352 +0000 UTC m=+0.178220594 container remove a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:16:50 np0005604375 systemd[1]: libpod-conmon-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope: Deactivated successfully.
Feb  1 10:16:50 np0005604375 podman[246759]: 2026-02-01 15:16:50.922381015 +0000 UTC m=+0.042803325 container create 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:16:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:16:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46075784' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:16:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:16:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46075784' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:16:50 np0005604375 systemd[1]: Started libpod-conmon-75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5.scope.
Feb  1 10:16:50 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:16:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:50 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:50 np0005604375 podman[246759]: 2026-02-01 15:16:50.991496709 +0000 UTC m=+0.111919029 container init 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:16:50 np0005604375 podman[246759]: 2026-02-01 15:16:50.995680437 +0000 UTC m=+0.116102747 container start 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 10:16:50 np0005604375 podman[246759]: 2026-02-01 15:16:50.998823875 +0000 UTC m=+0.119246165 container attach 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  1 10:16:50 np0005604375 podman[246759]: 2026-02-01 15:16:50.904607935 +0000 UTC m=+0.025030265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:16:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]: {
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:    "0": [
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:        {
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "devices": [
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "/dev/loop3"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            ],
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_name": "ceph_lv0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_size": "21470642176",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "name": "ceph_lv0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "tags": {
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cluster_name": "ceph",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.crush_device_class": "",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.encrypted": "0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.objectstore": "bluestore",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osd_id": "0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.type": "block",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.vdo": "0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.with_tpm": "0"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            },
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "type": "block",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "vg_name": "ceph_vg0"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:        }
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:    ],
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:    "1": [
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:        {
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "devices": [
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "/dev/loop4"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            ],
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_name": "ceph_lv1",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_size": "21470642176",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "name": "ceph_lv1",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "tags": {
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cluster_name": "ceph",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.crush_device_class": "",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.encrypted": "0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.objectstore": "bluestore",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osd_id": "1",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.type": "block",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.vdo": "0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.with_tpm": "0"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            },
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "type": "block",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "vg_name": "ceph_vg1"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:        }
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:    ],
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:    "2": [
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:        {
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "devices": [
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "/dev/loop5"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            ],
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_name": "ceph_lv2",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_size": "21470642176",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "name": "ceph_lv2",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "tags": {
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.cluster_name": "ceph",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.crush_device_class": "",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.encrypted": "0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.objectstore": "bluestore",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osd_id": "2",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.type": "block",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.vdo": "0",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:                "ceph.with_tpm": "0"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            },
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "type": "block",
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:            "vg_name": "ceph_vg2"
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:        }
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]:    ]
Feb  1 10:16:51 np0005604375 gifted_murdock[246776]: }
Feb  1 10:16:51 np0005604375 systemd[1]: libpod-75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5.scope: Deactivated successfully.
Feb  1 10:16:51 np0005604375 podman[246759]: 2026-02-01 15:16:51.304287229 +0000 UTC m=+0.424709599 container died 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:16:51 np0005604375 nova_compute[238794]: 2026-02-01 15:16:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:51 np0005604375 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:51 np0005604375 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:51 np0005604375 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:16:51 np0005604375 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:51 np0005604375 nova_compute[238794]: 2026-02-01 15:16:51.322 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  1 10:16:51 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6-merged.mount: Deactivated successfully.
Feb  1 10:16:51 np0005604375 podman[246759]: 2026-02-01 15:16:51.34982928 +0000 UTC m=+0.470251610 container remove 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:16:51 np0005604375 systemd[1]: libpod-conmon-75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5.scope: Deactivated successfully.
Feb  1 10:16:51 np0005604375 podman[246861]: 2026-02-01 15:16:51.780846635 +0000 UTC m=+0.047506997 container create 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:16:51 np0005604375 systemd[1]: Started libpod-conmon-1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6.scope.
Feb  1 10:16:51 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:16:51 np0005604375 podman[246861]: 2026-02-01 15:16:51.840060061 +0000 UTC m=+0.106720473 container init 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:16:51 np0005604375 podman[246861]: 2026-02-01 15:16:51.846842482 +0000 UTC m=+0.113502804 container start 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  1 10:16:51 np0005604375 podman[246861]: 2026-02-01 15:16:51.75400336 +0000 UTC m=+0.020663762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:16:51 np0005604375 vigorous_goodall[246877]: 167 167
Feb  1 10:16:51 np0005604375 systemd[1]: libpod-1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6.scope: Deactivated successfully.
Feb  1 10:16:51 np0005604375 podman[246861]: 2026-02-01 15:16:51.849912028 +0000 UTC m=+0.116572390 container attach 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb  1 10:16:51 np0005604375 podman[246861]: 2026-02-01 15:16:51.850250448 +0000 UTC m=+0.116910800 container died 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:16:51 np0005604375 systemd[1]: var-lib-containers-storage-overlay-5b773edcfc57636807dc6ba765bfdda0af1ba8d1bbdd4e8ce098245ea3038bb1-merged.mount: Deactivated successfully.
Feb  1 10:16:51 np0005604375 podman[246861]: 2026-02-01 15:16:51.889278026 +0000 UTC m=+0.155938388 container remove 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 10:16:51 np0005604375 systemd[1]: libpod-conmon-1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6.scope: Deactivated successfully.
Feb  1 10:16:52 np0005604375 podman[246900]: 2026-02-01 15:16:52.043130354 +0000 UTC m=+0.055526123 container create 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 10:16:52 np0005604375 systemd[1]: Started libpod-conmon-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope.
Feb  1 10:16:52 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:16:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:52 np0005604375 podman[246900]: 2026-02-01 15:16:52.0209455 +0000 UTC m=+0.033341279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:16:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:52 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:16:52 np0005604375 podman[246900]: 2026-02-01 15:16:52.143669032 +0000 UTC m=+0.156064761 container init 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  1 10:16:52 np0005604375 podman[246900]: 2026-02-01 15:16:52.151280686 +0000 UTC m=+0.163676445 container start 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:16:52 np0005604375 podman[246900]: 2026-02-01 15:16:52.154634541 +0000 UTC m=+0.167030290 container attach 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:16:52 np0005604375 nova_compute[238794]: 2026-02-01 15:16:52.332 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:52 np0005604375 nova_compute[238794]: 2026-02-01 15:16:52.334 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  1 10:16:52 np0005604375 nova_compute[238794]: 2026-02-01 15:16:52.349 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 64 KiB/s wr, 10 op/s
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2'.
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/.meta.tmp'
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/.meta.tmp' to config b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/.meta'
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "format": "json"}]: dispatch
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:16:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:16:52 np0005604375 lvm[246997]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:16:52 np0005604375 lvm[246996]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:16:52 np0005604375 lvm[246996]: VG ceph_vg0 finished
Feb  1 10:16:52 np0005604375 lvm[246997]: VG ceph_vg1 finished
Feb  1 10:16:52 np0005604375 lvm[246999]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:16:52 np0005604375 lvm[246999]: VG ceph_vg2 finished
Feb  1 10:16:52 np0005604375 mystifying_antonelli[246917]: {}
Feb  1 10:16:52 np0005604375 systemd[1]: libpod-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope: Deactivated successfully.
Feb  1 10:16:52 np0005604375 systemd[1]: libpod-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope: Consumed 1.050s CPU time.
Feb  1 10:16:52 np0005604375 podman[247002]: 2026-02-01 15:16:52.901025327 +0000 UTC m=+0.018765879 container died 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:16:52 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c-merged.mount: Deactivated successfully.
Feb  1 10:16:52 np0005604375 podman[247002]: 2026-02-01 15:16:52.927856612 +0000 UTC m=+0.045597164 container remove 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:16:52 np0005604375 systemd[1]: libpod-conmon-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope: Deactivated successfully.
Feb  1 10:16:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:16:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:16:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:16:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:16:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:16:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:16:53 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:16:53 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:16:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:16:53 np0005604375 nova_compute[238794]: 2026-02-01 15:16:53.337 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 28 KiB/s wr, 5 op/s
Feb  1 10:16:55 np0005604375 nova_compute[238794]: 2026-02-01 15:16:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:16:55 np0005604375 nova_compute[238794]: 2026-02-01 15:16:55.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:16:55 np0005604375 nova_compute[238794]: 2026-02-01 15:16:55.345 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:16:55 np0005604375 nova_compute[238794]: 2026-02-01 15:16:55.345 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:16:55 np0005604375 nova_compute[238794]: 2026-02-01 15:16:55.345 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:16:55 np0005604375 nova_compute[238794]: 2026-02-01 15:16:55.346 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:16:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:16:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890315408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:16:55 np0005604375 nova_compute[238794]: 2026-02-01 15:16:55.908 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.052 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.053 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.053 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.054 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:16:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 46 KiB/s wr, 7 op/s
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.424 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.425 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.649 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing inventories for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.753 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating ProviderTree inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.753 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.775 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing aggregate associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.807 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing trait associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, traits: COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  1 10:16:56 np0005604375 nova_compute[238794]: 2026-02-01 15:16:56.825 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:16:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:16:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:16:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:16:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:16:56 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:16:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:16:57 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:16:57 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:16:57 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:16:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:16:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075517171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:16:57 np0005604375 nova_compute[238794]: 2026-02-01 15:16:57.367 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:16:57 np0005604375 nova_compute[238794]: 2026-02-01 15:16:57.374 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:16:57 np0005604375 nova_compute[238794]: 2026-02-01 15:16:57.395 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:16:57 np0005604375 nova_compute[238794]: 2026-02-01 15:16:57.399 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:16:57 np0005604375 nova_compute[238794]: 2026-02-01 15:16:57.399 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:16:57 np0005604375 ceph-osd[88066]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000039s
Feb  1 10:16:57 np0005604375 ceph-osd[87011]: bluestore.MempoolThread fragmentation_score=0.000033 took=0.000034s
Feb  1 10:16:57 np0005604375 ceph-osd[85969]: bluestore.MempoolThread fragmentation_score=0.000137 took=0.000025s
Feb  1 10:16:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/43092e54-1971-4f06-9465-62c98a7959e3'.
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp'
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp' to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta'
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "format": "json"}]: dispatch
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:16:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:16:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:16:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:16:59 np0005604375 podman[247088]: 2026-02-01 15:16:59.963942289 +0000 UTC m=+0.050805900 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  1 10:17:00 np0005604375 podman[247089]: 2026-02-01 15:17:00.008502052 +0000 UTC m=+0.095287130 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  1 10:17:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb  1 10:17:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:17:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:17:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 6 op/s
Feb  1 10:17:02 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc", "format": "json"}]: dispatch
Feb  1 10:17:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c'.
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/.meta.tmp'
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/.meta.tmp' to config b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/.meta'
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "format": "json"}]: dispatch
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 4 op/s
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:17:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:04 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc_05a837c1-3311-42f7-8cdb-24af5bea7bca", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc_05a837c1-3311-42f7-8cdb-24af5bea7bca, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp'
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp' to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta'
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc_05a837c1-3311-42f7-8cdb-24af5bea7bca, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 82 KiB/s wr, 9 op/s
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp'
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp' to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta'
Feb  1 10:17:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:17:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb  1 10:17:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID eve49 with tenant 557407533ddd4b83a57f3bf0896f77ac
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:17:07.812 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:17:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:17:07.813 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:17:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:17:07.813 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:17:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:17:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:17:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 7 op/s
Feb  1 10:17:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "format": "json"}]: dispatch
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:10 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:10.062+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af1fdb5d-a0b1-4be1-a773-3eafab00aae8' of type subvolume
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af1fdb5d-a0b1-4be1-a773-3eafab00aae8' of type subvolume
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8'' moved to trashcan
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 77 KiB/s wr, 8 op/s
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb  1 10:17:10 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID eve48 with tenant 557407533ddd4b83a57f3bf0896f77ac
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:10 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:10 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:17:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:11 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:17:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:17:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 53 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 103 KiB/s wr, 12 op/s
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/2de0a33b-53fe-4bbd-9974-0c024599c273'.
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/.meta.tmp'
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/.meta.tmp' to config b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/.meta'
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "format": "json"}]: dispatch
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb  1 10:17:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 53 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 103 KiB/s wr, 12 op/s
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "format": "json"}]: dispatch
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Feb  1 10:17:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb  1 10:17:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Feb  1 10:17:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Feb  1 10:17:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "format": "json"}]: dispatch
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c
Feb  1 10:17:14 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c],prefix=session evict} (starting...)
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb  1 10:17:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Feb  1 10:17:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Feb  1 10:17:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:17:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:17:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:16 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 122 KiB/s wr, 14 op/s
Feb  1 10:17:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:17 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:17:17
Feb  1 10:17:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:17:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:17:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'vms', 'images', 'cephfs.cephfs.data']
Feb  1 10:17:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "format": "json"}]: dispatch
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dbb3e62-b996-4ace-bb16-037502f09dce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dbb3e62-b996-4ace-bb16-037502f09dce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:18 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:18.050+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6dbb3e62-b996-4ace-bb16-037502f09dce' of type subvolume
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6dbb3e62-b996-4ace-bb16-037502f09dce' of type subvolume
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce'' moved to trashcan
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb  1 10:17:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Feb  1 10:17:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb  1 10:17:18 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID eve47 with tenant 557407533ddd4b83a57f3bf0896f77ac
Feb  1 10:17:18 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  1 10:17:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 122 KiB/s wr, 14 op/s
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825b5b8370>)]
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825be50e80>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f82797d1670>)]
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:17:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.viosrg(active, since 27m)
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.367850) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039367885, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1537, "num_deletes": 252, "total_data_size": 1967574, "memory_usage": 2000336, "flush_reason": "Manual Compaction"}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039382987, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1945116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19942, "largest_seqno": 21478, "table_properties": {"data_size": 1938002, "index_size": 3932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17568, "raw_average_key_size": 21, "raw_value_size": 1922649, "raw_average_value_size": 2305, "num_data_blocks": 177, "num_entries": 834, "num_filter_entries": 834, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958956, "oldest_key_time": 1769958956, "file_creation_time": 1769959039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15213 microseconds, and 6255 cpu microseconds.
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.383058) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1945116 bytes OK
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.383081) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389162) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389190) EVENT_LOG_v1 {"time_micros": 1769959039389182, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389213) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1960247, prev total WAL file size 1960247, number of live WAL files 2.
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389804) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1899KB)], [47(7132KB)]
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039389846, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9248530, "oldest_snapshot_seqno": -1}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4745 keys, 7455691 bytes, temperature: kUnknown
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039435265, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7455691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7423284, "index_size": 19433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11909, "raw_key_size": 118005, "raw_average_key_size": 24, "raw_value_size": 7337094, "raw_average_value_size": 1546, "num_data_blocks": 808, "num_entries": 4745, "num_filter_entries": 4745, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.435506) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7455691 bytes
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.436933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.2 rd, 163.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.6) write-amplify(3.8) OK, records in: 5273, records dropped: 528 output_compression: NoCompression
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.436978) EVENT_LOG_v1 {"time_micros": 1769959039436941, "job": 24, "event": "compaction_finished", "compaction_time_micros": 45523, "compaction_time_cpu_micros": 23731, "output_level": 6, "num_output_files": 1, "total_output_size": 7455691, "num_input_records": 5273, "num_output_records": 4745, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039437239, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039438089, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:17:19 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:17:19 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:17:19 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:17:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:17:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:20 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:17:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:17:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 118 KiB/s wr, 14 op/s
Feb  1 10:17:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb  1 10:17:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb  1 10:17:21 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/8a0240ed-5f88-4931-965b-b8f7feb2baae'.
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "format": "json"}]: dispatch
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 16 op/s
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "format": "json"}]: dispatch
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "format": "json"}]: dispatch
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c
Feb  1 10:17:22 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c],prefix=session evict} (starting...)
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/c9a3a2d8-1885-4fd7-9e5b-aba6a99f983b'.
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp'
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp' to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta'
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "format": "json"}]: dispatch
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Feb  1 10:17:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:17:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:23 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:23 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:24 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 16 op/s
Feb  1 10:17:25 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "format": "json"}]: dispatch
Feb  1 10:17:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:17:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:17:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659614191380082 of space, bias 1.0, pg target 0.19978842574140246 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00022374933924286552 of space, bias 4.0, pg target 0.2684992070914386 quantized to 16 (current 16)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95", "format": "json"}]: dispatch
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:17:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:17:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:17:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:28 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "target_sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb  1 10:17:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, target_sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/b946c66e-6da4-4a91-b4c8-4c95fea0475d'.
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 04d37bf3-1c0c-4039-ac3f-39a73a48d6b5 for path b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, target_sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.055+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 57b6c133-b657-4e29-ab3e-f40863c80360)
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 57b6c133-b657-4e29-ab3e-f40863c80360) -- by 0 seconds
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "format": "json"}]: dispatch
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.snap/c61fb956-cb54-4a69-b984-796f123291a0/8a0240ed-5f88-4931-965b-b8f7feb2baae' to b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/b946c66e-6da4-4a91-b4c8-4c95fea0475d'
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Feb  1 10:17:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "format": "json"}]: dispatch
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] untracking 04d37bf3-1c0c-4039-ac3f-39a73a48d6b5
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 57b6c133-b657-4e29-ab3e-f40863c80360)
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c
Feb  1 10:17:29 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c],prefix=session evict} (starting...)
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "format": "json"}]: dispatch
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c9b2fd01-3509-428e-b915-0b74e783dc19, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c9b2fd01-3509-428e-b915-0b74e783dc19, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:30 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:30.028+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9b2fd01-3509-428e-b915-0b74e783dc19' of type subvolume
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9b2fd01-3509-428e-b915-0b74e783dc19' of type subvolume
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19'' moved to trashcan
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-ongoing-clones does not exist
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f82797d15e0>
Feb  1 10:17:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb  1 10:17:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb  1 10:17:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Feb  1 10:17:30 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Feb  1 10:17:31 np0005604375 podman[247165]: 2026-02-01 15:17:31.008797103 +0000 UTC m=+0.092010340 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:17:31 np0005604375 podman[247166]: 2026-02-01 15:17:31.017034594 +0000 UTC m=+0.102609867 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  1 10:17:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:31 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.viosrg(active, since 27m)
Feb  1 10:17:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:17:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:17:31 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:31 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:31 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:31 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 56 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 118 KiB/s wr, 13 op/s
Feb  1 10:17:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:33 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:17:33.480 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:17:33 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:17:33.482 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/78726976-b5a8-431b-96ab-e953f68fd3ff'.
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/.meta.tmp'
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/.meta.tmp' to config b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/.meta'
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "format": "json"}]: dispatch
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb  1 10:17:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb  1 10:17:33 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:33 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 56 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 111 KiB/s wr, 12 op/s
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:35 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:17:35 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:17:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 152 KiB/s wr, 18 op/s
Feb  1 10:17:36 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:17:36.485 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "format": "json"}]: dispatch
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:37 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:37.239+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c24d660-d99e-4a84-8d8a-dd162ef7a432' of type subvolume
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c24d660-d99e-4a84-8d8a-dd162ef7a432' of type subvolume
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432'' moved to trashcan
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb  1 10:17:38 np0005604375 nova_compute[238794]: 2026-02-01 15:17:38.108 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:17:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 102 KiB/s wr, 12 op/s
Feb  1 10:17:38 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:17:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:38 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:17:38 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:38 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 102 KiB/s wr, 12 op/s
Feb  1 10:17:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/d292e24c-a6d4-450e-a222-6c2b805383e3'.
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/.meta.tmp'
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/.meta.tmp' to config b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/.meta'
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "format": "json"}]: dispatch
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb  1 10:17:41 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb  1 10:17:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:41 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 139 KiB/s wr, 16 op/s
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:42 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:42 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:17:42 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:17:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 9 op/s
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86350af1-da40-441c-befe-cde1cbd30541", "format": "json"}]: dispatch
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86350af1-da40-441c-befe-cde1cbd30541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86350af1-da40-441c-befe-cde1cbd30541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:45 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:45.827+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86350af1-da40-441c-befe-cde1cbd30541' of type subvolume
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86350af1-da40-441c-befe-cde1cbd30541' of type subvolume
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541'' moved to trashcan
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:45 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb  1 10:17:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:46 np0005604375 nova_compute[238794]: 2026-02-01 15:17:46.340 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:17:46 np0005604375 nova_compute[238794]: 2026-02-01 15:17:46.340 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:17:46 np0005604375 nova_compute[238794]: 2026-02-01 15:17:46.341 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:17:46 np0005604375 nova_compute[238794]: 2026-02-01 15:17:46.356 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:17:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 104 KiB/s wr, 12 op/s
Feb  1 10:17:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:17:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:17:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:46 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:47 np0005604375 nova_compute[238794]: 2026-02-01 15:17:47.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:17:48 np0005604375 nova_compute[238794]: 2026-02-01 15:17:48.315 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 6 op/s
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/861fb7cb-7d04-4083-bc0f-ab5d8a2821b0'.
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp'
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp' to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta'
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "format": "json"}]: dispatch
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:17:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/383e1c99-f6dd-41d8-9eef-e85139cf1415'.
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/.meta.tmp'
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/.meta.tmp' to config b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/.meta'
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "format": "json"}]: dispatch
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb  1 10:17:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:17:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:17:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:17:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:17:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:17:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:17:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:50 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:17:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:17:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:17:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 6 op/s
Feb  1 10:17:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:17:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2079590638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:17:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:17:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2079590638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:17:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:51 np0005604375 nova_compute[238794]: 2026-02-01 15:17:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:17:51 np0005604375 nova_compute[238794]: 2026-02-01 15:17:51.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  1 10:17:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80", "format": "json"}]: dispatch
Feb  1 10:17:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:52 np0005604375 nova_compute[238794]: 2026-02-01 15:17:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 122 KiB/s wr, 12 op/s
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "887d0676-527e-47b5-bf80-254c50cf4633", "format": "json"}]: dispatch
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:887d0676-527e-47b5-bf80-254c50cf4633, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:887d0676-527e-47b5-bf80-254c50cf4633, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:52 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:52.749+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '887d0676-527e-47b5-bf80-254c50cf4633' of type subvolume
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '887d0676-527e-47b5-bf80-254c50cf4633' of type subvolume
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633'' moved to trashcan
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb  1 10:17:53 np0005604375 nova_compute[238794]: 2026-02-01 15:17:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:17:53 np0005604375 nova_compute[238794]: 2026-02-01 15:17:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:17:53 np0005604375 nova_compute[238794]: 2026-02-01 15:17:53.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:17:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:17:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:53 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:17:53 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:17:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:17:54 np0005604375 podman[247356]: 2026-02-01 15:17:54.075878168 +0000 UTC m=+0.054621574 container create ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 10:17:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:17:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:17:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:17:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:17:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:17:54 np0005604375 systemd[1]: Started libpod-conmon-ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61.scope.
Feb  1 10:17:54 np0005604375 podman[247356]: 2026-02-01 15:17:54.054365338 +0000 UTC m=+0.033108784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:17:54 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:17:54 np0005604375 podman[247356]: 2026-02-01 15:17:54.169216052 +0000 UTC m=+0.147959498 container init ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:17:54 np0005604375 podman[247356]: 2026-02-01 15:17:54.177251606 +0000 UTC m=+0.155995002 container start ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:17:54 np0005604375 podman[247356]: 2026-02-01 15:17:54.181330389 +0000 UTC m=+0.160073795 container attach ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  1 10:17:54 np0005604375 crazy_napier[247372]: 167 167
Feb  1 10:17:54 np0005604375 systemd[1]: libpod-ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61.scope: Deactivated successfully.
Feb  1 10:17:54 np0005604375 podman[247356]: 2026-02-01 15:17:54.184913449 +0000 UTC m=+0.163656845 container died ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 10:17:54 np0005604375 systemd[1]: var-lib-containers-storage-overlay-c502389dab7046a76e818727c0b0124ec511c79bacf3f14b7a3bf9a4b264a4e9-merged.mount: Deactivated successfully.
Feb  1 10:17:54 np0005604375 podman[247356]: 2026-02-01 15:17:54.235796258 +0000 UTC m=+0.214539664 container remove ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:17:54 np0005604375 systemd[1]: libpod-conmon-ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61.scope: Deactivated successfully.
Feb  1 10:17:54 np0005604375 podman[247396]: 2026-02-01 15:17:54.406482969 +0000 UTC m=+0.054108930 container create cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:17:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 KiB/s wr, 9 op/s
Feb  1 10:17:54 np0005604375 systemd[1]: Started libpod-conmon-cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5.scope.
Feb  1 10:17:54 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:17:54 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:54 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:54 np0005604375 podman[247396]: 2026-02-01 15:17:54.381366758 +0000 UTC m=+0.028992809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:17:54 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:54 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:54 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:54 np0005604375 podman[247396]: 2026-02-01 15:17:54.495037849 +0000 UTC m=+0.142663890 container init cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 10:17:54 np0005604375 podman[247396]: 2026-02-01 15:17:54.501813268 +0000 UTC m=+0.149439229 container start cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:17:54 np0005604375 podman[247396]: 2026-02-01 15:17:54.505504481 +0000 UTC m=+0.153130442 container attach cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:17:54 np0005604375 brave_hellman[247412]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:17:54 np0005604375 brave_hellman[247412]: --> All data devices are unavailable
Feb  1 10:17:54 np0005604375 systemd[1]: libpod-cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5.scope: Deactivated successfully.
Feb  1 10:17:54 np0005604375 podman[247396]: 2026-02-01 15:17:54.956671884 +0000 UTC m=+0.604297875 container died cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 10:17:54 np0005604375 systemd[1]: var-lib-containers-storage-overlay-70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51-merged.mount: Deactivated successfully.
Feb  1 10:17:54 np0005604375 podman[247396]: 2026-02-01 15:17:54.996442503 +0000 UTC m=+0.644068464 container remove cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  1 10:17:55 np0005604375 systemd[1]: libpod-conmon-cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5.scope: Deactivated successfully.
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.342 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.342 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.342 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:17:55 np0005604375 podman[247505]: 2026-02-01 15:17:55.385562425 +0000 UTC m=+0.044642336 container create 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  1 10:17:55 np0005604375 systemd[1]: Started libpod-conmon-9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b.scope.
Feb  1 10:17:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:17:55 np0005604375 podman[247505]: 2026-02-01 15:17:55.453902542 +0000 UTC m=+0.112982463 container init 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 10:17:55 np0005604375 podman[247505]: 2026-02-01 15:17:55.459827177 +0000 UTC m=+0.118907088 container start 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:17:55 np0005604375 podman[247505]: 2026-02-01 15:17:55.462375638 +0000 UTC m=+0.121455549 container attach 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:17:55 np0005604375 competent_bell[247522]: 167 167
Feb  1 10:17:55 np0005604375 systemd[1]: libpod-9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b.scope: Deactivated successfully.
Feb  1 10:17:55 np0005604375 podman[247505]: 2026-02-01 15:17:55.36814408 +0000 UTC m=+0.027224031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:17:55 np0005604375 podman[247505]: 2026-02-01 15:17:55.463845899 +0000 UTC m=+0.122925810 container died 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  1 10:17:55 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9d03cf9ead78f7c7444d260a5ba547315bf77c150357050192d9aadada30f3b3-merged.mount: Deactivated successfully.
Feb  1 10:17:55 np0005604375 podman[247505]: 2026-02-01 15:17:55.500945504 +0000 UTC m=+0.160025415 container remove 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:17:55 np0005604375 systemd[1]: libpod-conmon-9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b.scope: Deactivated successfully.
Feb  1 10:17:55 np0005604375 podman[247565]: 2026-02-01 15:17:55.660807552 +0000 UTC m=+0.041620302 container create d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  1 10:17:55 np0005604375 systemd[1]: Started libpod-conmon-d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63.scope.
Feb  1 10:17:55 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:17:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:55 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:55 np0005604375 podman[247565]: 2026-02-01 15:17:55.642741838 +0000 UTC m=+0.023554608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:17:55 np0005604375 podman[247565]: 2026-02-01 15:17:55.747993594 +0000 UTC m=+0.128806404 container init d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:17:55 np0005604375 podman[247565]: 2026-02-01 15:17:55.753796446 +0000 UTC m=+0.134609186 container start d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 10:17:55 np0005604375 podman[247565]: 2026-02-01 15:17:55.75682582 +0000 UTC m=+0.137638660 container attach d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:17:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:17:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670416892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.822 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.963 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.965 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5029MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.965 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:17:55 np0005604375 nova_compute[238794]: 2026-02-01 15:17:55.966 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.039 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.039 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]: {
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:    "0": [
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:        {
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "devices": [
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "/dev/loop3"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            ],
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_name": "ceph_lv0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_size": "21470642176",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "name": "ceph_lv0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "tags": {
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cluster_name": "ceph",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.crush_device_class": "",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.encrypted": "0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.objectstore": "bluestore",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osd_id": "0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.type": "block",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.vdo": "0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.with_tpm": "0"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            },
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "type": "block",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "vg_name": "ceph_vg0"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:        }
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:    ],
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:    "1": [
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:        {
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "devices": [
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "/dev/loop4"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            ],
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_name": "ceph_lv1",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_size": "21470642176",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "name": "ceph_lv1",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "tags": {
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cluster_name": "ceph",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.crush_device_class": "",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.encrypted": "0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.objectstore": "bluestore",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osd_id": "1",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.type": "block",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.vdo": "0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.with_tpm": "0"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            },
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "type": "block",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "vg_name": "ceph_vg1"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:        }
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:    ],
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:    "2": [
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:        {
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "devices": [
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "/dev/loop5"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            ],
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_name": "ceph_lv2",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_size": "21470642176",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "name": "ceph_lv2",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "tags": {
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.cluster_name": "ceph",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.crush_device_class": "",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.encrypted": "0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.objectstore": "bluestore",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osd_id": "2",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.type": "block",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.vdo": "0",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:                "ceph.with_tpm": "0"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            },
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "type": "block",
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:            "vg_name": "ceph_vg2"
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:        }
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]:    ]
Feb  1 10:17:56 np0005604375 cool_wozniak[247582]: }
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.058 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:17:56 np0005604375 systemd[1]: libpod-d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63.scope: Deactivated successfully.
Feb  1 10:17:56 np0005604375 podman[247565]: 2026-02-01 15:17:56.071082295 +0000 UTC m=+0.451895115 container died d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:17:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104-merged.mount: Deactivated successfully.
Feb  1 10:17:56 np0005604375 podman[247565]: 2026-02-01 15:17:56.110025751 +0000 UTC m=+0.490838531 container remove d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  1 10:17:56 np0005604375 systemd[1]: libpod-conmon-d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63.scope: Deactivated successfully.
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/1fbe2a21-7c37-459f-8e2c-6b17c0091c4a'.
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/.meta.tmp'
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/.meta.tmp' to config b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/.meta'
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "format": "json"}]: dispatch
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80_028bb641-87db-46c7-9018-3f8d054e8e72", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80_028bb641-87db-46c7-9018-3f8d054e8e72, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp'
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp' to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta'
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80_028bb641-87db-46c7-9018-3f8d054e8e72, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp'
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp' to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta'
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 121 KiB/s wr, 13 op/s
Feb  1 10:17:56 np0005604375 podman[247689]: 2026-02-01 15:17:56.506428597 +0000 UTC m=+0.054188943 container create f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:17:56 np0005604375 systemd[1]: Started libpod-conmon-f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed.scope.
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363305773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:17:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.571 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:17:56 np0005604375 podman[247689]: 2026-02-01 15:17:56.480990247 +0000 UTC m=+0.028750643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.581 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:17:56 np0005604375 podman[247689]: 2026-02-01 15:17:56.58397934 +0000 UTC m=+0.131739746 container init f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  1 10:17:56 np0005604375 podman[247689]: 2026-02-01 15:17:56.588005752 +0000 UTC m=+0.135766088 container start f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 10:17:56 np0005604375 podman[247689]: 2026-02-01 15:17:56.59115983 +0000 UTC m=+0.138920176 container attach f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:17:56 np0005604375 gallant_elion[247705]: 167 167
Feb  1 10:17:56 np0005604375 systemd[1]: libpod-f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed.scope: Deactivated successfully.
Feb  1 10:17:56 np0005604375 podman[247689]: 2026-02-01 15:17:56.592222039 +0000 UTC m=+0.139982375 container died f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.603 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.605 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:17:56 np0005604375 nova_compute[238794]: 2026-02-01 15:17:56.606 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:17:56 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f3d133afa6e01b60f260c1df3e23517bff893e3fcb0ea06d5ae88f0e4dc84861-merged.mount: Deactivated successfully.
Feb  1 10:17:56 np0005604375 podman[247689]: 2026-02-01 15:17:56.63026059 +0000 UTC m=+0.178020936 container remove f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  1 10:17:56 np0005604375 systemd[1]: libpod-conmon-f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed.scope: Deactivated successfully.
Feb  1 10:17:56 np0005604375 podman[247731]: 2026-02-01 15:17:56.77546758 +0000 UTC m=+0.045697295 container create 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:17:56 np0005604375 systemd[1]: Started libpod-conmon-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope.
Feb  1 10:17:56 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:17:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:56 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:17:56 np0005604375 podman[247731]: 2026-02-01 15:17:56.758493217 +0000 UTC m=+0.028722892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:17:56 np0005604375 podman[247731]: 2026-02-01 15:17:56.879704857 +0000 UTC m=+0.149934562 container init 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:17:56 np0005604375 podman[247731]: 2026-02-01 15:17:56.892677459 +0000 UTC m=+0.162907154 container start 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:17:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:17:56 np0005604375 podman[247731]: 2026-02-01 15:17:56.896534767 +0000 UTC m=+0.166764482 container attach 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:17:56 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:17:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:17:57 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:17:57 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:17:57 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:17:57 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb  1 10:17:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:57 np0005604375 lvm[247826]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:17:57 np0005604375 lvm[247826]: VG ceph_vg0 finished
Feb  1 10:17:57 np0005604375 lvm[247829]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:17:57 np0005604375 lvm[247829]: VG ceph_vg1 finished
Feb  1 10:17:57 np0005604375 lvm[247831]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:17:57 np0005604375 lvm[247831]: VG ceph_vg2 finished
Feb  1 10:17:57 np0005604375 practical_nightingale[247747]: {}
Feb  1 10:17:57 np0005604375 systemd[1]: libpod-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope: Deactivated successfully.
Feb  1 10:17:57 np0005604375 systemd[1]: libpod-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope: Consumed 1.066s CPU time.
Feb  1 10:17:57 np0005604375 podman[247731]: 2026-02-01 15:17:57.615625491 +0000 UTC m=+0.885855156 container died 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:17:57 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c-merged.mount: Deactivated successfully.
Feb  1 10:17:57 np0005604375 podman[247731]: 2026-02-01 15:17:57.659741712 +0000 UTC m=+0.929971377 container remove 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  1 10:17:57 np0005604375 systemd[1]: libpod-conmon-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope: Deactivated successfully.
Feb  1 10:17:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:17:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:17:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:17:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:17:58 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:17:58 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:17:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 94 KiB/s wr, 10 op/s
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb  1 10:17:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:17:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "format": "json"}]: dispatch
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff89896c-730f-4d0f-b5d3-5b63ed6c492d' of type subvolume
Feb  1 10:17:59 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:59.953+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff89896c-730f-4d0f-b5d3-5b63ed6c492d' of type subvolume
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "force": true, "format": "json"}]: dispatch
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d'' moved to trashcan
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:17:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "format": "json"}]: dispatch
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:92466679-2a01-470b-96b5-c6d88c0b6509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:92466679-2a01-470b-96b5-c6d88c0b6509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92466679-2a01-470b-96b5-c6d88c0b6509' of type subvolume
Feb  1 10:18:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:00.095+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92466679-2a01-470b-96b5-c6d88c0b6509' of type subvolume
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509'' moved to trashcan
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 113 KiB/s wr, 12 op/s
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360'' moved to trashcan
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb  1 10:18:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:01 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:02 np0005604375 podman[247871]: 2026-02-01 15:18:02.002976302 +0000 UTC m=+0.089531328 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  1 10:18:02 np0005604375 podman[247872]: 2026-02-01 15:18:02.024245166 +0000 UTC m=+0.114713511 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:18:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 58 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 125 KiB/s wr, 12 op/s
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/4a423dca-0a02-4c3b-a2ec-997402614fd5'.
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/.meta.tmp'
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/.meta.tmp' to config b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/.meta'
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "format": "json"}]: dispatch
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb  1 10:18:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb  1 10:18:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:18:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:04 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0_83c92ce7-3e64-4538-8f22-ddff58a7c70b", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0_83c92ce7-3e64-4538-8f22-ddff58a7c70b, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0_83c92ce7-3e64-4538-8f22-ddff58a7c70b, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:18:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 58 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 125 KiB/s wr, 12 op/s
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:18:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:18:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb  1 10:18:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb  1 10:18:06 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb  1 10:18:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 158 KiB/s wr, 17 op/s
Feb  1 10:18:06 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:18:07.813 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  1 10:18:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:18:07.814 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  1 10:18:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:18:07.814 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "format": "json"}]: dispatch
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:07.855+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f71d70ca-3bed-407e-bd13-18c8cbf0995f' of type subvolume
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f71d70ca-3bed-407e-bd13-18c8cbf0995f' of type subvolume
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f'' moved to trashcan
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "format": "json"}]: dispatch
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:07.905+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eaae1ab0-0f33-4607-9838-62c2bdc360fb' of type subvolume
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eaae1ab0-0f33-4607-9838-62c2bdc360fb' of type subvolume
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb'' moved to trashcan
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb  1 10:18:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 158 KiB/s wr, 17 op/s
Feb  1 10:18:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 56 KiB/s wr, 7 op/s
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:11 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:18:11 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95_b6d4e46b-8d52-41c3-ae82-52a9e57131ed", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95_b6d4e46b-8d52-41c3-ae82-52a9e57131ed, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp'
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp' to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta'
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95_b6d4e46b-8d52-41c3-ae82-52a9e57131ed, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp'
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp' to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta'
Feb  1 10:18:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:18:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 114 KiB/s wr, 14 op/s
Feb  1 10:18:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 746 B/s rd, 57 KiB/s wr, 6 op/s
Feb  1 10:18:14 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:18:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:18:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:14 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:14 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:14 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:14 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "format": "json"}]: dispatch
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:15 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:15.336+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb' of type subvolume
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb' of type subvolume
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb'' moved to trashcan
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb  1 10:18:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb  1 10:18:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb  1 10:18:15 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb  1 10:18:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb  1 10:18:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb  1 10:18:16 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb  1 10:18:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 136 KiB/s wr, 15 op/s
Feb  1 10:18:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:18:17
Feb  1 10:18:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:18:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:18:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'backups', 'images', 'volumes']
Feb  1 10:18:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 136 KiB/s wr, 15 op/s
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:18 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:18:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:18:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/7d6b8a93-2239-49a1-a970-ce3d1b5be304'.
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "format": "json"}]: dispatch
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:18:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:18:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:18:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 7 op/s
Feb  1 10:18:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:18:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:18:21 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:21 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 123 KiB/s wr, 13 op/s
Feb  1 10:18:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "format": "json"}]: dispatch
Feb  1 10:18:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:18:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 110 KiB/s wr, 11 op/s
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "target_sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, target_sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/461aa132-07e7-4d84-b5b6-931252a109cb'.
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 147d3459-ad16-48b8-8783-219c36fdf6db for path b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e'
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, target_sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, e91ca10f-a5ab-4efe-a6b7-448ed904538e)
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, e91ca10f-a5ab-4efe-a6b7-448ed904538e) -- by 0 seconds
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb  1 10:18:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb  1 10:18:25 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.snap/383b4f57-c12d-4143-bc64-f94b56aa4406/7d6b8a93-2239-49a1-a970-ce3d1b5be304' to b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/461aa132-07e7-4d84-b5b6-931252a109cb'
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] untracking 147d3459-ad16-48b8-8783-219c36fdf6db
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, e91ca10f-a5ab-4efe-a6b7-448ed904538e)
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:25 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:18:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:18:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Feb  1 10:18:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Feb  1 10:18:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 920 B/s rd, 122 KiB/s wr, 12 op/s
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/b7cb2b49-e944-42cc-9aea-91bc5616fa3a'.
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp'
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp' to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta'
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "format": "json"}]: dispatch
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:18:26 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Feb  1 10:18:26 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f82797d15e0>
Feb  1 10:18:27 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.viosrg(active, since 28m)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659702529057695 of space, bias 1.0, pg target 0.19979107587173084 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00032473327732104813 of space, bias 4.0, pg target 0.38967993278525775 quantized to 16 (current 16)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 122 KiB/s wr, 12 op/s
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [progress INFO root] Writing back 19 completed events
Feb  1 10:18:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  1 10:18:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:18:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:18:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:28 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:29 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:29 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:18:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:29 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e", "format": "json"}]: dispatch
Feb  1 10:18:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:29 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 122 KiB/s wr, 12 op/s
Feb  1 10:18:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 61 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 131 KiB/s wr, 14 op/s
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:32 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:18:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:18:32 np0005604375 podman[247931]: 2026-02-01 15:18:32.975165014 +0000 UTC m=+0.062106293 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb  1 10:18:32 np0005604375 podman[247932]: 2026-02-01 15:18:32.994932206 +0000 UTC m=+0.085655000 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb  1 10:18:33 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:18:33.620 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:18:33 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:18:33.623 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:18:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e_7fcd119c-0e43-4007-8ec8-3d4fbb59c309", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e_7fcd119c-0e43-4007-8ec8-3d4fbb59c309, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp'
Feb  1 10:18:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp' to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta'
Feb  1 10:18:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e_7fcd119c-0e43-4007-8ec8-3d4fbb59c309, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp'
Feb  1 10:18:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp' to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta'
Feb  1 10:18:34 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 61 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 131 KiB/s wr, 14 op/s
Feb  1 10:18:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:36 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:18:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:18:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:36 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 700 B/s rd, 82 KiB/s wr, 9 op/s
Feb  1 10:18:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:36 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "format": "json"}]: dispatch
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc1a1612-4970-46ec-aefe-db2d1c0f8688' of type subvolume
Feb  1 10:18:37 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:37.323+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc1a1612-4970-46ec-aefe-db2d1c0f8688' of type subvolume
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688'' moved to trashcan
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:37 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb  1 10:18:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:37 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 70 KiB/s wr, 8 op/s
Feb  1 10:18:38 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:18:38.626 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/839cb248-0acc-449c-9f35-9972fc8e8c70'.
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/.meta.tmp'
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/.meta.tmp' to config b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/.meta'
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "format": "json"}]: dispatch
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb  1 10:18:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:18:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:18:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:40 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 83 KiB/s wr, 9 op/s
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/e581cce0-6e5d-4f0c-9f72-b6f802b6db39'.
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp'
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp' to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta'
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "format": "json"}]: dispatch
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:18:40 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:18:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 62 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 114 KiB/s wr, 10 op/s
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:18:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:43 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "format": "json"}]: dispatch
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:43 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:43.833+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c38a331c-6d1f-4342-961a-602e5b4f62e5' of type subvolume
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c38a331c-6d1f-4342-961a-602e5b4f62e5' of type subvolume
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5'' moved to trashcan
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb  1 10:18:44 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4", "format": "json"}]: dispatch
Feb  1 10:18:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:44 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 62 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 114 KiB/s wr, 10 op/s
Feb  1 10:18:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 12 op/s
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/68e0a21d-a250-4452-852e-a3bef2850322'.
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/.meta.tmp'
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/.meta.tmp' to config b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/.meta'
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "format": "json"}]: dispatch
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb  1 10:18:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb  1 10:18:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:18:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:47 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:47 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:18:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:18:47 np0005604375 nova_compute[238794]: 2026-02-01 15:18:47.608 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:47 np0005604375 nova_compute[238794]: 2026-02-01 15:18:47.608 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:18:47 np0005604375 nova_compute[238794]: 2026-02-01 15:18:47.608 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:18:47 np0005604375 nova_compute[238794]: 2026-02-01 15:18:47.622 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:18:47 np0005604375 nova_compute[238794]: 2026-02-01 15:18:47.622 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4_c0b5a8ad-609c-4622-bb29-29375e2fdb31", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4_c0b5a8ad-609c-4622-bb29-29375e2fdb31, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:48 np0005604375 nova_compute[238794]: 2026-02-01 15:18:48.329 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp'
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp' to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta'
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4_c0b5a8ad-609c-4622-bb29-29375e2fdb31, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp'
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp' to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta'
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 12 op/s
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:18:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:18:49 np0005604375 nova_compute[238794]: 2026-02-01 15:18:49.335 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 131 KiB/s wr, 11 op/s
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:50 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "format": "json"}]: dispatch
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:50 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:50.905+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9' of type subvolume
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9' of type subvolume
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9'' moved to trashcan
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:50 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4008296022' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:18:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4008296022' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:18:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Feb  1 10:18:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Feb  1 10:18:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Feb  1 10:18:51 np0005604375 nova_compute[238794]: 2026-02-01 15:18:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:51 np0005604375 nova_compute[238794]: 2026-02-01 15:18:51.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "format": "json"}]: dispatch
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:51.706+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565' of type subvolume
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565' of type subvolume
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "force": true, "format": "json"}]: dispatch
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565'' moved to trashcan
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:18:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb  1 10:18:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Feb  1 10:18:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Feb  1 10:18:52 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Feb  1 10:18:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 177 KiB/s wr, 15 op/s
Feb  1 10:18:53 np0005604375 nova_compute[238794]: 2026-02-01 15:18:53.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:18:54 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:18:54 np0005604375 nova_compute[238794]: 2026-02-01 15:18:54.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 102 KiB/s wr, 9 op/s
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:18:54 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:18:55 np0005604375 nova_compute[238794]: 2026-02-01 15:18:55.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:55 np0005604375 nova_compute[238794]: 2026-02-01 15:18:55.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:18:56 np0005604375 nova_compute[238794]: 2026-02-01 15:18:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:18:56 np0005604375 nova_compute[238794]: 2026-02-01 15:18:56.350 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:18:56 np0005604375 nova_compute[238794]: 2026-02-01 15:18:56.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:18:56 np0005604375 nova_compute[238794]: 2026-02-01 15:18:56.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:18:56 np0005604375 nova_compute[238794]: 2026-02-01 15:18:56.351 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:18:56 np0005604375 nova_compute[238794]: 2026-02-01 15:18:56.352 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:18:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 164 KiB/s wr, 15 op/s
Feb  1 10:18:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:18:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2155576865' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:18:56 np0005604375 nova_compute[238794]: 2026-02-01 15:18:56.910 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.140 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.142 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5077MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.143 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.143 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:18:57 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb  1 10:18:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.228 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.228 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.242 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:18:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:18:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031007255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.842 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.848 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.861 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.863 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:18:57 np0005604375 nova_compute[238794]: 2026-02-01 15:18:57.863 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:18:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:18:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 164 KiB/s wr, 15 op/s
Feb  1 10:18:58 np0005604375 podman[248166]: 2026-02-01 15:18:58.769412431 +0000 UTC m=+0.050576631 container create 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:18:58 np0005604375 systemd[1]: Started libpod-conmon-164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5.scope.
Feb  1 10:18:58 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:18:58 np0005604375 podman[248166]: 2026-02-01 15:18:58.74712121 +0000 UTC m=+0.028285450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:18:58 np0005604375 podman[248166]: 2026-02-01 15:18:58.846697977 +0000 UTC m=+0.127862177 container init 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:18:58 np0005604375 podman[248166]: 2026-02-01 15:18:58.852798547 +0000 UTC m=+0.133962777 container start 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:18:58 np0005604375 podman[248166]: 2026-02-01 15:18:58.85686162 +0000 UTC m=+0.138025820 container attach 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:18:58 np0005604375 keen_chaplygin[248182]: 167 167
Feb  1 10:18:58 np0005604375 systemd[1]: libpod-164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5.scope: Deactivated successfully.
Feb  1 10:18:58 np0005604375 podman[248166]: 2026-02-01 15:18:58.858614239 +0000 UTC m=+0.139778439 container died 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:18:58 np0005604375 systemd[1]: var-lib-containers-storage-overlay-71521779ea9230345375d974fb643ce612c41a936eb794d8137170e9b00ab5a3-merged.mount: Deactivated successfully.
Feb  1 10:18:58 np0005604375 podman[248166]: 2026-02-01 15:18:58.900259191 +0000 UTC m=+0.181423401 container remove 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  1 10:18:58 np0005604375 systemd[1]: libpod-conmon-164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5.scope: Deactivated successfully.
Feb  1 10:18:59 np0005604375 podman[248206]: 2026-02-01 15:18:59.054078191 +0000 UTC m=+0.037442426 container create 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  1 10:18:59 np0005604375 systemd[1]: Started libpod-conmon-410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1.scope.
Feb  1 10:18:59 np0005604375 podman[248206]: 2026-02-01 15:18:59.037278012 +0000 UTC m=+0.020642237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:18:59 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:18:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:18:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:18:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:18:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:18:59 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:18:59 np0005604375 podman[248206]: 2026-02-01 15:18:59.168125321 +0000 UTC m=+0.151489626 container init 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  1 10:18:59 np0005604375 podman[248206]: 2026-02-01 15:18:59.180778254 +0000 UTC m=+0.164142459 container start 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  1 10:18:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:18:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:18:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:18:59 np0005604375 podman[248206]: 2026-02-01 15:18:59.184714794 +0000 UTC m=+0.168079029 container attach 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  1 10:18:59 np0005604375 admiring_cerf[248222]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:18:59 np0005604375 admiring_cerf[248222]: --> All data devices are unavailable
Feb  1 10:18:59 np0005604375 systemd[1]: libpod-410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1.scope: Deactivated successfully.
Feb  1 10:18:59 np0005604375 podman[248206]: 2026-02-01 15:18:59.65929086 +0000 UTC m=+0.642655075 container died 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:18:59 np0005604375 systemd[1]: var-lib-containers-storage-overlay-b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6-merged.mount: Deactivated successfully.
Feb  1 10:18:59 np0005604375 podman[248206]: 2026-02-01 15:18:59.706513457 +0000 UTC m=+0.689877672 container remove 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:18:59 np0005604375 systemd[1]: libpod-conmon-410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1.scope: Deactivated successfully.
Feb  1 10:18:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:18:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb  1 10:18:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb  1 10:18:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb  1 10:18:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:18:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:19:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:00 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:00 np0005604375 podman[248318]: 2026-02-01 15:19:00.15012411 +0000 UTC m=+0.051938500 container create 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:00 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:00 np0005604375 systemd[1]: Started libpod-conmon-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope.
Feb  1 10:19:00 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:19:00 np0005604375 podman[248318]: 2026-02-01 15:19:00.126397218 +0000 UTC m=+0.028211658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:19:00 np0005604375 podman[248318]: 2026-02-01 15:19:00.229333349 +0000 UTC m=+0.131147749 container init 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  1 10:19:00 np0005604375 podman[248318]: 2026-02-01 15:19:00.235532962 +0000 UTC m=+0.137347372 container start 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:19:00 np0005604375 podman[248318]: 2026-02-01 15:19:00.239400549 +0000 UTC m=+0.141214929 container attach 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:19:00 np0005604375 jovial_faraday[248334]: 167 167
Feb  1 10:19:00 np0005604375 systemd[1]: libpod-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope: Deactivated successfully.
Feb  1 10:19:00 np0005604375 conmon[248334]: conmon 26481b27da1673ef4fa2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope/container/memory.events
Feb  1 10:19:00 np0005604375 podman[248318]: 2026-02-01 15:19:00.242916708 +0000 UTC m=+0.144731098 container died 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:19:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay-da4017325c4110425e46470ba3a32c2132b196333c7896affb487cfab659f4be-merged.mount: Deactivated successfully.
Feb  1 10:19:00 np0005604375 podman[248318]: 2026-02-01 15:19:00.280499376 +0000 UTC m=+0.182313786 container remove 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  1 10:19:00 np0005604375 systemd[1]: libpod-conmon-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope: Deactivated successfully.
Feb  1 10:19:00 np0005604375 podman[248357]: 2026-02-01 15:19:00.454937481 +0000 UTC m=+0.049731068 container create 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:19:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 335 B/s rd, 54 KiB/s wr, 5 op/s
Feb  1 10:19:00 np0005604375 systemd[1]: Started libpod-conmon-1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7.scope.
Feb  1 10:19:00 np0005604375 podman[248357]: 2026-02-01 15:19:00.428893984 +0000 UTC m=+0.023687621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:19:00 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:19:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:00 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:00 np0005604375 podman[248357]: 2026-02-01 15:19:00.576587664 +0000 UTC m=+0.171381301 container init 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:19:00 np0005604375 podman[248357]: 2026-02-01 15:19:00.583080225 +0000 UTC m=+0.177873812 container start 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  1 10:19:00 np0005604375 podman[248357]: 2026-02-01 15:19:00.586730937 +0000 UTC m=+0.181524534 container attach 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]: {
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:    "0": [
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:        {
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "devices": [
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "/dev/loop3"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            ],
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_name": "ceph_lv0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_size": "21470642176",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "name": "ceph_lv0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "tags": {
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cluster_name": "ceph",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.crush_device_class": "",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.encrypted": "0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.objectstore": "bluestore",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osd_id": "0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.type": "block",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.vdo": "0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.with_tpm": "0"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            },
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "type": "block",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "vg_name": "ceph_vg0"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:        }
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:    ],
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:    "1": [
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:        {
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "devices": [
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "/dev/loop4"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            ],
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_name": "ceph_lv1",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_size": "21470642176",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "name": "ceph_lv1",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "tags": {
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cluster_name": "ceph",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.crush_device_class": "",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.encrypted": "0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.objectstore": "bluestore",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osd_id": "1",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.type": "block",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.vdo": "0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.with_tpm": "0"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            },
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "type": "block",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "vg_name": "ceph_vg1"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:        }
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:    ],
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:    "2": [
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:        {
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "devices": [
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "/dev/loop5"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            ],
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_name": "ceph_lv2",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_size": "21470642176",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "name": "ceph_lv2",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "tags": {
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.cluster_name": "ceph",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.crush_device_class": "",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.encrypted": "0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.objectstore": "bluestore",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osd_id": "2",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.type": "block",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.vdo": "0",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:                "ceph.with_tpm": "0"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            },
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "type": "block",
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:            "vg_name": "ceph_vg2"
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:        }
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]:    ]
Feb  1 10:19:00 np0005604375 admiring_yalow[248374]: }
Feb  1 10:19:00 np0005604375 systemd[1]: libpod-1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7.scope: Deactivated successfully.
Feb  1 10:19:00 np0005604375 podman[248357]: 2026-02-01 15:19:00.916277068 +0000 UTC m=+0.511070645 container died 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  1 10:19:00 np0005604375 systemd[1]: var-lib-containers-storage-overlay-5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2-merged.mount: Deactivated successfully.
Feb  1 10:19:00 np0005604375 podman[248357]: 2026-02-01 15:19:00.963413632 +0000 UTC m=+0.558207189 container remove 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:19:00 np0005604375 systemd[1]: libpod-conmon-1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7.scope: Deactivated successfully.
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:19:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:19:01 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:01 np0005604375 podman[248456]: 2026-02-01 15:19:01.490060101 +0000 UTC m=+0.065202160 container create 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 10:19:01 np0005604375 systemd[1]: Started libpod-conmon-37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a.scope.
Feb  1 10:19:01 np0005604375 podman[248456]: 2026-02-01 15:19:01.463788658 +0000 UTC m=+0.038930777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:19:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:19:01 np0005604375 podman[248456]: 2026-02-01 15:19:01.58326016 +0000 UTC m=+0.158402219 container init 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:19:01 np0005604375 podman[248456]: 2026-02-01 15:19:01.59258843 +0000 UTC m=+0.167730459 container start 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 10:19:01 np0005604375 podman[248456]: 2026-02-01 15:19:01.595724218 +0000 UTC m=+0.170866247 container attach 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  1 10:19:01 np0005604375 tender_moore[248474]: 167 167
Feb  1 10:19:01 np0005604375 systemd[1]: libpod-37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a.scope: Deactivated successfully.
Feb  1 10:19:01 np0005604375 podman[248456]: 2026-02-01 15:19:01.597996851 +0000 UTC m=+0.173138910 container died 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:19:01 np0005604375 systemd[1]: var-lib-containers-storage-overlay-2b1a5771207a9792079b9fbbdda377153bc1cac57725869e46bdaaa1f5fef6d4-merged.mount: Deactivated successfully.
Feb  1 10:19:01 np0005604375 podman[248456]: 2026-02-01 15:19:01.650471683 +0000 UTC m=+0.225613742 container remove 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:19:01 np0005604375 systemd[1]: libpod-conmon-37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a.scope: Deactivated successfully.
Feb  1 10:19:01 np0005604375 podman[248498]: 2026-02-01 15:19:01.839923187 +0000 UTC m=+0.056820075 container create 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 10:19:01 np0005604375 systemd[1]: Started libpod-conmon-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope.
Feb  1 10:19:01 np0005604375 podman[248498]: 2026-02-01 15:19:01.817874032 +0000 UTC m=+0.034770920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:19:01 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:19:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:01 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:19:01 np0005604375 podman[248498]: 2026-02-01 15:19:01.948993389 +0000 UTC m=+0.165890277 container init 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  1 10:19:01 np0005604375 podman[248498]: 2026-02-01 15:19:01.963509234 +0000 UTC m=+0.180406122 container start 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  1 10:19:01 np0005604375 podman[248498]: 2026-02-01 15:19:01.970515449 +0000 UTC m=+0.187412337 container attach 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 93 KiB/s wr, 8 op/s
Feb  1 10:19:02 np0005604375 lvm[248594]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:19:02 np0005604375 lvm[248594]: VG ceph_vg1 finished
Feb  1 10:19:02 np0005604375 lvm[248593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:19:02 np0005604375 lvm[248593]: VG ceph_vg0 finished
Feb  1 10:19:02 np0005604375 lvm[248596]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:19:02 np0005604375 lvm[248596]: VG ceph_vg2 finished
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:02 np0005604375 competent_sinoussi[248515]: {}
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/ddd00d22-d077-475e-a668-ba7be553860a'.
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/.meta.tmp'
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/.meta.tmp' to config b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/.meta'
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:02 np0005604375 systemd[1]: libpod-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope: Deactivated successfully.
Feb  1 10:19:02 np0005604375 systemd[1]: libpod-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope: Consumed 1.362s CPU time.
Feb  1 10:19:02 np0005604375 podman[248498]: 2026-02-01 15:19:02.837279494 +0000 UTC m=+1.054176382 container died 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "format": "json"}]: dispatch
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:02 np0005604375 systemd[1]: var-lib-containers-storage-overlay-3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4-merged.mount: Deactivated successfully.
Feb  1 10:19:02 np0005604375 podman[248498]: 2026-02-01 15:19:02.8744216 +0000 UTC m=+1.091318448 container remove 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  1 10:19:02 np0005604375 systemd[1]: libpod-conmon-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope: Deactivated successfully.
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:19:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:19:03 np0005604375 podman[248636]: 2026-02-01 15:19:03.13580939 +0000 UTC m=+0.117625532 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  1 10:19:03 np0005604375 podman[248637]: 2026-02-01 15:19:03.156178088 +0000 UTC m=+0.137541837 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  1 10:19:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:19:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/ca3ae955-cb00-4008-bc9e-6ebd7fc60edf'.
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/.meta.tmp'
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/.meta.tmp' to config b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/.meta'
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "format": "json"}]: dispatch
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb  1 10:19:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb  1 10:19:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 93 KiB/s wr, 8 op/s
Feb  1 10:19:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:19:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:19:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:04 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:04 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:04 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:05 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:06 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "new_size": 2147483648, "format": "json"}]: dispatch
Feb  1 10:19:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:06 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/0af9b831-6215-4111-ba2a-47cc2086c878'.
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/.meta.tmp'
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/.meta.tmp' to config b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/.meta'
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "format": "json"}]: dispatch
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb  1 10:19:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb  1 10:19:07 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:07 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:19:07.814 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:19:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:19:07.815 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:19:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:19:07.815 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:19:08 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:19:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "format": "json"}]: dispatch
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:09 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:09.872+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd707ecfe-f6ee-49fe-a02c-3c565e379dff' of type subvolume
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd707ecfe-f6ee-49fe-a02c-3c565e379dff' of type subvolume
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff'' moved to trashcan
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb  1 10:19:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:11 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/24b4e50d-218b-41bb-b9dd-f25fddccd8d7'.
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/.meta.tmp'
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/.meta.tmp' to config b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/.meta'
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "format": "json"}]: dispatch
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb  1 10:19:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:11 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 91 B/s rd, 118 KiB/s wr, 8 op/s
Feb  1 10:19:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:12 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 110 KiB/s wr, 8 op/s
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:19:15 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/3168f927-e301-452a-884c-a434cfe97158'.
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/.meta.tmp'
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/.meta.tmp' to config b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/.meta'
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "format": "json"}]: dispatch
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 160 KiB/s wr, 13 op/s
Feb  1 10:19:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:19:17
Feb  1 10:19:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:19:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:19:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root']
Feb  1 10:19:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/10b4830f-ffdf-472e-bb12-472493dd5549'.
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/.meta.tmp'
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/.meta.tmp' to config b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/.meta'
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "format": "json"}]: dispatch
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 104 KiB/s wr, 8 op/s
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:18 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:18 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:19:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:19:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 104 KiB/s wr, 8 op/s
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "format": "json"}]: dispatch
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:20 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:20.619+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bb072d4-78e4-494f-ab70-eb9c366fac63' of type subvolume
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bb072d4-78e4-494f-ab70-eb9c366fac63' of type subvolume
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63'' moved to trashcan
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb  1 10:19:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:21 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Feb  1 10:19:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:21 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:19:22 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 67 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 171 KiB/s wr, 13 op/s
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:19:22 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "format": "json"}]: dispatch
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd57288c1-6475-4afc-b89b-63e0397aa3d5' of type subvolume
Feb  1 10:19:24 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:24.197+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd57288c1-6475-4afc-b89b-63e0397aa3d5' of type subvolume
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5'' moved to trashcan
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 67 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 117 KiB/s wr, 9 op/s
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "format": "json"}]: dispatch
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:24 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:24.938+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'db1d1fea-0e00-4e6b-b733-ef0fe090c2f5' of type subvolume
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'db1d1fea-0e00-4e6b-b733-ef0fe090c2f5' of type subvolume
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5'' moved to trashcan
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb  1 10:19:25 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:19:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:25 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:25 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 175 KiB/s wr, 15 op/s
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "format": "json"}]: dispatch
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b1b736a-26a1-4658-8b8f-779a2b222e80' of type subvolume
Feb  1 10:19:27 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:27.670+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b1b736a-26a1-4658-8b8f-779a2b222e80' of type subvolume
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80'' moved to trashcan
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659708118084749 of space, bias 1.0, pg target 0.1997912435425425 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004374240069927442 of space, bias 4.0, pg target 0.5249088083912931 quantized to 16 (current 16)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053787e-07 of space, bias 1.0, pg target 0.0001907721234616136 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/a80aa19e-424e-4e1a-a7e9-653f5a86eda0'.
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/.meta.tmp'
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/.meta.tmp' to config b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/.meta'
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "format": "json"}]: dispatch
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 125 KiB/s wr, 10 op/s
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb  1 10:19:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:28 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb  1 10:19:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:19:28 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:19:28 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:28 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb  1 10:19:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb  1 10:19:29 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb  1 10:19:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 125 KiB/s wr, 10 op/s
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "format": "json"}]: dispatch
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:31 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:31.167+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5f371bcf-0672-4b5f-9567-1fcaf6940905' of type subvolume
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5f371bcf-0672-4b5f-9567-1fcaf6940905' of type subvolume
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905'' moved to trashcan
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:31 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb  1 10:19:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:32 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:19:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:32 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:32 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 177 KiB/s wr, 15 op/s
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:32 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "format": "json"}]: dispatch
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:69f497f0-f1d5-405b-b865-e545c0627b3a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:69f497f0-f1d5-405b-b865-e545c0627b3a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:33 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:33.324+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '69f497f0-f1d5-405b-b865-e545c0627b3a' of type subvolume
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '69f497f0-f1d5-405b-b865-e545c0627b3a' of type subvolume
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a'' moved to trashcan
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:33 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb  1 10:19:33 np0005604375 podman[248687]: 2026-02-01 15:19:33.969991419 +0000 UTC m=+0.055113488 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  1 10:19:34 np0005604375 podman[248688]: 2026-02-01 15:19:34.000101139 +0000 UTC m=+0.083812528 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb  1 10:19:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 110 KiB/s wr, 10 op/s
Feb  1 10:19:35 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:19:35.854 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:19:35 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:19:35.855 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:19:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:35 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:19:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:19:35 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:19:35 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 152 KiB/s wr, 15 op/s
Feb  1 10:19:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:19:36 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:36 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:19:36.857 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e'' moved to trashcan
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:36 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb  1 10:19:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 93 KiB/s wr, 9 op/s
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.959980) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959178960027, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2421, "num_deletes": 257, "total_data_size": 3036464, "memory_usage": 3079368, "flush_reason": "Manual Compaction"}
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959178975366, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2986253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21479, "largest_seqno": 23899, "table_properties": {"data_size": 2975431, "index_size": 6612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26726, "raw_average_key_size": 21, "raw_value_size": 2952054, "raw_average_value_size": 2392, "num_data_blocks": 292, "num_entries": 1234, "num_filter_entries": 1234, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769959040, "oldest_key_time": 1769959040, "file_creation_time": 1769959178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 15460 microseconds, and 8364 cpu microseconds.
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.975436) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2986253 bytes OK
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.975462) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.976958) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.976981) EVENT_LOG_v1 {"time_micros": 1769959178976973, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.977006) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3025416, prev total WAL file size 3025416, number of live WAL files 2.
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.977844) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2916KB)], [50(7280KB)]
Feb  1 10:19:38 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959178977905, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10441944, "oldest_snapshot_seqno": -1}
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5446 keys, 8638825 bytes, temperature: kUnknown
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959179029928, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8638825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8600961, "index_size": 23162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 134650, "raw_average_key_size": 24, "raw_value_size": 8501734, "raw_average_value_size": 1561, "num_data_blocks": 962, "num_entries": 5446, "num_filter_entries": 5446, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.030235) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8638825 bytes
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.031980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.4 rd, 165.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.1 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 5979, records dropped: 533 output_compression: NoCompression
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.032008) EVENT_LOG_v1 {"time_micros": 1769959179031994, "job": 26, "event": "compaction_finished", "compaction_time_micros": 52114, "compaction_time_cpu_micros": 27779, "output_level": 6, "num_output_files": 1, "total_output_size": 8638825, "num_input_records": 5979, "num_output_records": 5446, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959179032609, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959179034045, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.977734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:19:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb  1 10:19:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:39 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:39 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406_f84678b0-2860-4390-8392-13cdcac44563", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406_f84678b0-2860-4390-8392-13cdcac44563, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406_f84678b0-2860-4390-8392-13cdcac44563, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:19:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 93 KiB/s wr, 9 op/s
Feb  1 10:19:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 15 op/s
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:19:43 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "format": "json"}]: dispatch
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:43.870+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1bb4ab8-c449-4ad1-83d0-cba448059572' of type subvolume
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1bb4ab8-c449-4ad1-83d0-cba448059572' of type subvolume
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572'' moved to trashcan
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb  1 10:19:43 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb  1 10:19:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 105 KiB/s wr, 10 op/s
Feb  1 10:19:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Feb  1 10:19:44 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Feb  1 10:19:45 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Feb  1 10:19:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb  1 10:19:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:19:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb  1 10:19:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:46 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb  1 10:19:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb  1 10:19:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:46 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb  1 10:19:47 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825bec9940>)]
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825b5d27c0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825be5cdf0>)]
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:19:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:19:49 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.viosrg(active, since 29m)
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/a22da935-3d14-467d-800a-8fe6059d4763'.
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/.meta.tmp'
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/.meta.tmp' to config b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/.meta'
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "format": "json"}]: dispatch
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb  1 10:19:49 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb  1 10:19:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:49 np0005604375 nova_compute[238794]: 2026-02-01 15:19:49.863 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:19:49 np0005604375 nova_compute[238794]: 2026-02-01 15:19:49.864 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  1 10:19:49 np0005604375 nova_compute[238794]: 2026-02-01 15:19:49.864 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  1 10:19:49 np0005604375 nova_compute[238794]: 2026-02-01 15:19:49.881 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  1 10:19:49 np0005604375 nova_compute[238794]: 2026-02-01 15:19:49.881 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:19:50 np0005604375 nova_compute[238794]: 2026-02-01 15:19:50.333 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:19:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb  1 10:19:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:19:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2063902718' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:19:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:19:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2063902718' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:19:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Feb  1 10:19:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Feb  1 10:19:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Feb  1 10:19:51 np0005604375 nova_compute[238794]: 2026-02-01 15:19:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:19:51 np0005604375 nova_compute[238794]: 2026-02-01 15:19:51.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202'.
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/.meta.tmp'
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/.meta.tmp' to config b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/.meta'
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "format": "json"}]: dispatch
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 112 KiB/s wr, 11 op/s
Feb  1 10:19:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:52 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/5b65efb9-bab8-427f-8dd7-fedcb50bea0f'.
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/.meta.tmp'
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/.meta.tmp' to config b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/.meta'
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "format": "json"}]: dispatch
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb  1 10:19:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "format": "json"}]: dispatch
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:53 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:53.593+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8d162ed-6915-4c91-85d0-a5648c53b8d8' of type subvolume
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8d162ed-6915-4c91-85d0-a5648c53b8d8' of type subvolume
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8'' moved to trashcan
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:53 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb  1 10:19:54 np0005604375 nova_compute[238794]: 2026-02-01 15:19:54.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  1 10:19:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 539 B/s rd, 94 KiB/s wr, 9 op/s
Feb  1 10:19:55 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb  1 10:19:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb  1 10:19:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:55 np0005604375 nova_compute[238794]: 2026-02-01 15:19:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:19:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]} v 0)
Feb  1 10:19:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]} : dispatch
Feb  1 10:19:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]}]': finished
Feb  1 10:19:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb  1 10:19:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:55 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb  1 10:19:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]} : dispatch
Feb  1 10:19:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]}]': finished
Feb  1 10:19:56 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:19:56 np0005604375 nova_compute[238794]: 2026-02-01 15:19:56.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:19:56 np0005604375 nova_compute[238794]: 2026-02-01 15:19:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/aaba7f52-5353-40f7-aa14-6d95137a862b'.
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "format": "json"}]: dispatch
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:19:56 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:19:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:19:56 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:19:57 np0005604375 nova_compute[238794]: 2026-02-01 15:19:57.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:19:57 np0005604375 nova_compute[238794]: 2026-02-01 15:19:57.346 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:19:57 np0005604375 nova_compute[238794]: 2026-02-01 15:19:57.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:19:57 np0005604375 nova_compute[238794]: 2026-02-01 15:19:57.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:19:57 np0005604375 nova_compute[238794]: 2026-02-01 15:19:57.348 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:19:57 np0005604375 nova_compute[238794]: 2026-02-01 15:19:57.348 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "format": "json"}]: dispatch
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:19:57 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:57.709+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b1172f7-abae-4452-a7de-df2b972dd4b6' of type subvolume
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b1172f7-abae-4452-a7de-df2b972dd4b6' of type subvolume
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "force": true, "format": "json"}]: dispatch
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6'' moved to trashcan
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:19:57 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb  1 10:19:57 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:19:57 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847495700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:19:57 np0005604375 nova_compute[238794]: 2026-02-01 15:19:57.930 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.056 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.057 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5060MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.058 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.058 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.136 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.136 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.164 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:19:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb  1 10:19:58 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:19:58 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/521019029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.647 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.653 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.670 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.672 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:19:58 np0005604375 nova_compute[238794]: 2026-02-01 15:19:58.672 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:19:58 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "format": "json"}]: dispatch
Feb  1 10:19:58 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]} v 0)
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]} : dispatch
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]}]': finished
Feb  1 10:19:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:59 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "format": "json"}]: dispatch
Feb  1 10:19:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202
Feb  1 10:19:59 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202],prefix=session evict} (starting...)
Feb  1 10:19:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:19:59 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]} : dispatch
Feb  1 10:19:59 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]}]': finished
Feb  1 10:20:00 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1", "format": "json"}]: dispatch
Feb  1 10:20:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:00 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:20:00 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5241 writes, 24K keys, 5241 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 5241 writes, 5241 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1856 writes, 8970 keys, 1856 commit groups, 1.0 writes per commit group, ingest: 11.37 MB, 0.02 MB/s#012Interval WAL: 1856 writes, 1856 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    174.6      0.16              0.06        13    0.012       0      0       0.0       0.0#012  L6      1/0    8.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    206.7    170.7      0.53              0.23        12    0.044     55K   6344       0.0       0.0#012 Sum      1/0    8.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    159.7    171.6      0.69              0.30        25    0.028     55K   6344       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.3    151.0    153.8      0.39              0.17        12    0.033     31K   3150       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    206.7    170.7      0.53              0.23        12    0.044     55K   6344       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    180.2      0.15              0.06        12    0.013       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.027, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.11 GB read, 0.06 MB/s read, 0.7 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 304.00 MB usage: 11.57 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000229 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(718,11.11 MB,3.65304%) FilterBlock(26,162.11 KB,0.0520756%) IndexBlock(26,311.61 KB,0.100101%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  1 10:20:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb  1 10:20:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:01 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "1db05a86-0bcd-436c-91b4-4e5f418a5b3f", "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:20:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb  1 10:20:01 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 116 KiB/s wr, 9 op/s
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "format": "json"}]: dispatch
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:20:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb  1 10:20:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:20:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Feb  1 10:20:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Feb  1 10:20:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "format": "json"}]: dispatch
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb  1 10:20:02 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb  1 10:20:02 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Feb  1 10:20:03 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489", "format": "json"}]: dispatch
Feb  1 10:20:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:20:03 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:20:03 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:20:04 np0005604375 podman[248926]: 2026-02-01 15:20:04.030279377 +0000 UTC m=+0.059864461 container create 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:20:04 np0005604375 systemd[1]: Started libpod-conmon-816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426.scope.
Feb  1 10:20:04 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:20:04 np0005604375 podman[248926]: 2026-02-01 15:20:04.093602523 +0000 UTC m=+0.123187607 container init 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:20:04 np0005604375 podman[248926]: 2026-02-01 15:20:04.00313583 +0000 UTC m=+0.032720994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:20:04 np0005604375 podman[248926]: 2026-02-01 15:20:04.09995856 +0000 UTC m=+0.129543624 container start 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:20:04 np0005604375 podman[248926]: 2026-02-01 15:20:04.103829198 +0000 UTC m=+0.133414262 container attach 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  1 10:20:04 np0005604375 zealous_sanderson[248944]: 167 167
Feb  1 10:20:04 np0005604375 systemd[1]: libpod-816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426.scope: Deactivated successfully.
Feb  1 10:20:04 np0005604375 podman[248926]: 2026-02-01 15:20:04.10461101 +0000 UTC m=+0.134196064 container died 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:20:04 np0005604375 podman[248940]: 2026-02-01 15:20:04.114200418 +0000 UTC m=+0.053383250 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  1 10:20:04 np0005604375 systemd[1]: var-lib-containers-storage-overlay-18cb70bf2e328a264ef1194edfe7bd8db452b65f0c744a515803cc7d23aa5d27-merged.mount: Deactivated successfully.
Feb  1 10:20:04 np0005604375 podman[248926]: 2026-02-01 15:20:04.137723004 +0000 UTC m=+0.167308068 container remove 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:20:04 np0005604375 systemd[1]: libpod-conmon-816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426.scope: Deactivated successfully.
Feb  1 10:20:04 np0005604375 podman[248943]: 2026-02-01 15:20:04.14582637 +0000 UTC m=+0.084893409 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  1 10:20:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:20:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:20:04 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:20:04 np0005604375 podman[249011]: 2026-02-01 15:20:04.259261183 +0000 UTC m=+0.035219863 container create b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 10:20:04 np0005604375 systemd[1]: Started libpod-conmon-b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf.scope.
Feb  1 10:20:04 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:20:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:04 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:04 np0005604375 podman[249011]: 2026-02-01 15:20:04.246616491 +0000 UTC m=+0.022575191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:20:04 np0005604375 podman[249011]: 2026-02-01 15:20:04.345828228 +0000 UTC m=+0.121786908 container init b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 10:20:04 np0005604375 podman[249011]: 2026-02-01 15:20:04.354637583 +0000 UTC m=+0.130596263 container start b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  1 10:20:04 np0005604375 podman[249011]: 2026-02-01 15:20:04.357652488 +0000 UTC m=+0.133611168 container attach b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  1 10:20:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 108 KiB/s wr, 8 op/s
Feb  1 10:20:04 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "1db05a86-0bcd-436c-91b4-4e5f418a5b3f", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb  1 10:20:04 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb  1 10:20:04 np0005604375 jolly_moser[249027]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:20:04 np0005604375 jolly_moser[249027]: --> All data devices are unavailable
Feb  1 10:20:04 np0005604375 systemd[1]: libpod-b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf.scope: Deactivated successfully.
Feb  1 10:20:04 np0005604375 podman[249047]: 2026-02-01 15:20:04.955502812 +0000 UTC m=+0.038750422 container died b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:20:04 np0005604375 systemd[1]: var-lib-containers-storage-overlay-08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8-merged.mount: Deactivated successfully.
Feb  1 10:20:05 np0005604375 podman[249047]: 2026-02-01 15:20:05.004013305 +0000 UTC m=+0.087260875 container remove b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  1 10:20:05 np0005604375 systemd[1]: libpod-conmon-b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf.scope: Deactivated successfully.
Feb  1 10:20:05 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "93d4f46b-9bfd-433e-b5d5-9e9b76f62d85", "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:20:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb  1 10:20:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb  1 10:20:05 np0005604375 podman[249123]: 2026-02-01 15:20:05.478776836 +0000 UTC m=+0.051353343 container create e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:20:05 np0005604375 systemd[1]: Started libpod-conmon-e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e.scope.
Feb  1 10:20:05 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:20:05 np0005604375 podman[249123]: 2026-02-01 15:20:05.456752232 +0000 UTC m=+0.029328789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:20:05 np0005604375 podman[249123]: 2026-02-01 15:20:05.554000434 +0000 UTC m=+0.126576991 container init e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:20:05 np0005604375 podman[249123]: 2026-02-01 15:20:05.562665086 +0000 UTC m=+0.135241593 container start e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:20:05 np0005604375 podman[249123]: 2026-02-01 15:20:05.566827502 +0000 UTC m=+0.139403989 container attach e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:20:05 np0005604375 dazzling_ritchie[249141]: 167 167
Feb  1 10:20:05 np0005604375 systemd[1]: libpod-e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e.scope: Deactivated successfully.
Feb  1 10:20:05 np0005604375 podman[249123]: 2026-02-01 15:20:05.570259797 +0000 UTC m=+0.142836314 container died e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  1 10:20:05 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a4a6d5b793148c5c2d28f7a1ef13d6e705678c2699dc73ee6bcc1b7efacf8280-merged.mount: Deactivated successfully.
Feb  1 10:20:05 np0005604375 podman[249123]: 2026-02-01 15:20:05.612542637 +0000 UTC m=+0.185119154 container remove e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  1 10:20:05 np0005604375 systemd[1]: libpod-conmon-e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e.scope: Deactivated successfully.
Feb  1 10:20:05 np0005604375 podman[249164]: 2026-02-01 15:20:05.818732257 +0000 UTC m=+0.063985815 container create eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:20:05 np0005604375 systemd[1]: Started libpod-conmon-eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d.scope.
Feb  1 10:20:05 np0005604375 podman[249164]: 2026-02-01 15:20:05.791124147 +0000 UTC m=+0.036377765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:20:05 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:20:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:05 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:05 np0005604375 podman[249164]: 2026-02-01 15:20:05.921517874 +0000 UTC m=+0.166771432 container init eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:20:05 np0005604375 podman[249164]: 2026-02-01 15:20:05.937547781 +0000 UTC m=+0.182801339 container start eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:20:05 np0005604375 podman[249164]: 2026-02-01 15:20:05.94252678 +0000 UTC m=+0.187780318 container attach eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]: {
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:    "0": [
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:        {
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "devices": [
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "/dev/loop3"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            ],
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_name": "ceph_lv0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_size": "21470642176",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "name": "ceph_lv0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "tags": {
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cluster_name": "ceph",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.crush_device_class": "",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.encrypted": "0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.objectstore": "bluestore",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osd_id": "0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.type": "block",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.vdo": "0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.with_tpm": "0"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            },
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "type": "block",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "vg_name": "ceph_vg0"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:        }
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:    ],
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:    "1": [
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:        {
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "devices": [
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "/dev/loop4"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            ],
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_name": "ceph_lv1",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_size": "21470642176",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "name": "ceph_lv1",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "tags": {
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cluster_name": "ceph",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.crush_device_class": "",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.encrypted": "0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.objectstore": "bluestore",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osd_id": "1",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.type": "block",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.vdo": "0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.with_tpm": "0"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            },
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "type": "block",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "vg_name": "ceph_vg1"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:        }
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:    ],
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:    "2": [
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:        {
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "devices": [
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "/dev/loop5"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            ],
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_name": "ceph_lv2",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_size": "21470642176",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "name": "ceph_lv2",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "tags": {
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.cluster_name": "ceph",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.crush_device_class": "",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.encrypted": "0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.objectstore": "bluestore",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osd_id": "2",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.type": "block",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.vdo": "0",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:                "ceph.with_tpm": "0"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            },
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "type": "block",
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:            "vg_name": "ceph_vg2"
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:        }
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]:    ]
Feb  1 10:20:06 np0005604375 tender_cartwright[249180]: }
Feb  1 10:20:06 np0005604375 systemd[1]: libpod-eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d.scope: Deactivated successfully.
Feb  1 10:20:06 np0005604375 podman[249164]: 2026-02-01 15:20:06.251520567 +0000 UTC m=+0.496774155 container died eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:20:06 np0005604375 systemd[1]: var-lib-containers-storage-overlay-a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb-merged.mount: Deactivated successfully.
Feb  1 10:20:06 np0005604375 podman[249164]: 2026-02-01 15:20:06.305224275 +0000 UTC m=+0.550477833 container remove eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  1 10:20:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:06 np0005604375 systemd[1]: libpod-conmon-eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d.scope: Deactivated successfully.
Feb  1 10:20:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 147 KiB/s wr, 11 op/s
Feb  1 10:20:06 np0005604375 podman[249262]: 2026-02-01 15:20:06.83712764 +0000 UTC m=+0.057139725 container create b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 10:20:06 np0005604375 systemd[1]: Started libpod-conmon-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope.
Feb  1 10:20:06 np0005604375 podman[249262]: 2026-02-01 15:20:06.812920194 +0000 UTC m=+0.032932329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:20:06 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:20:06 np0005604375 podman[249262]: 2026-02-01 15:20:06.937845759 +0000 UTC m=+0.157857894 container init b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  1 10:20:06 np0005604375 podman[249262]: 2026-02-01 15:20:06.947774755 +0000 UTC m=+0.167786820 container start b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb  1 10:20:06 np0005604375 podman[249262]: 2026-02-01 15:20:06.952684452 +0000 UTC m=+0.172696587 container attach b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:20:06 np0005604375 relaxed_bhaskara[249279]: 167 167
Feb  1 10:20:06 np0005604375 systemd[1]: libpod-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope: Deactivated successfully.
Feb  1 10:20:06 np0005604375 conmon[249279]: conmon b315697aa6f218d92090 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope/container/memory.events
Feb  1 10:20:06 np0005604375 podman[249262]: 2026-02-01 15:20:06.955858921 +0000 UTC m=+0.175871006 container died b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:20:06 np0005604375 systemd[1]: var-lib-containers-storage-overlay-bb998b70a45c22e79e2377a4e262ebdd8658d83b49b41e92191a079641dc85ba-merged.mount: Deactivated successfully.
Feb  1 10:20:07 np0005604375 podman[249262]: 2026-02-01 15:20:07.001396631 +0000 UTC m=+0.221408716 container remove b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  1 10:20:07 np0005604375 systemd[1]: libpod-conmon-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope: Deactivated successfully.
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "format": "json"}]: dispatch
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:07.065+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5365cf8-68f4-4bb7-b1f2-7a560b4f3280' of type subvolume
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5365cf8-68f4-4bb7-b1f2-7a560b4f3280' of type subvolume
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280'' moved to trashcan
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 podman[249303]: 2026-02-01 15:20:07.172288267 +0000 UTC m=+0.048671838 container create a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb  1 10:20:07 np0005604375 systemd[1]: Started libpod-conmon-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope.
Feb  1 10:20:07 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:20:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:07 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:20:07 np0005604375 podman[249303]: 2026-02-01 15:20:07.148591006 +0000 UTC m=+0.024974627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:20:07 np0005604375 podman[249303]: 2026-02-01 15:20:07.260406965 +0000 UTC m=+0.136790616 container init a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  1 10:20:07 np0005604375 podman[249303]: 2026-02-01 15:20:07.274897359 +0000 UTC m=+0.151280910 container start a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:20:07 np0005604375 podman[249303]: 2026-02-01 15:20:07.277977435 +0000 UTC m=+0.154360986 container attach a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489_412605f1-3f08-4d5b-b5fa-295e1cba97d5", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489_412605f1-3f08-4d5b-b5fa-295e1cba97d5, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489_412605f1-3f08-4d5b-b5fa-295e1cba97d5, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:07 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:20:07.815 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:20:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:20:07.816 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:20:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:20:07.816 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:20:07 np0005604375 lvm[249398]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:20:07 np0005604375 lvm[249398]: VG ceph_vg0 finished
Feb  1 10:20:07 np0005604375 lvm[249399]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:20:07 np0005604375 lvm[249399]: VG ceph_vg1 finished
Feb  1 10:20:07 np0005604375 lvm[249401]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:20:07 np0005604375 lvm[249401]: VG ceph_vg2 finished
Feb  1 10:20:08 np0005604375 fervent_wilson[249320]: {}
Feb  1 10:20:08 np0005604375 systemd[1]: libpod-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope: Deactivated successfully.
Feb  1 10:20:08 np0005604375 systemd[1]: libpod-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope: Consumed 1.194s CPU time.
Feb  1 10:20:08 np0005604375 podman[249303]: 2026-02-01 15:20:08.061134347 +0000 UTC m=+0.937517968 container died a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  1 10:20:08 np0005604375 systemd[1]: var-lib-containers-storage-overlay-00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616-merged.mount: Deactivated successfully.
Feb  1 10:20:08 np0005604375 podman[249303]: 2026-02-01 15:20:08.101136763 +0000 UTC m=+0.977520304 container remove a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:20:08 np0005604375 systemd[1]: libpod-conmon-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope: Deactivated successfully.
Feb  1 10:20:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:20:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:20:08 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:20:08 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:20:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:20:08 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:20:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 101 KiB/s wr, 7 op/s
Feb  1 10:20:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "93d4f46b-9bfd-433e-b5d5-9e9b76f62d85", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb  1 10:20:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/7fe7cc3e-ade4-459d-8ee4-4b9d4afebbf6'.
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/.meta.tmp'
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/.meta.tmp' to config b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/.meta'
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "format": "json"}]: dispatch
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb  1 10:20:09 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb  1 10:20:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:20:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:20:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Feb  1 10:20:10 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Feb  1 10:20:10 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Feb  1 10:20:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 121 KiB/s wr, 9 op/s
Feb  1 10:20:11 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1", "format": "json"}]: dispatch
Feb  1 10:20:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:11 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 99 KiB/s wr, 9 op/s
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "format": "json"}]: dispatch
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:56740215-53be-496a-bb36-0fdd2c1498f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:56740215-53be-496a-bb36-0fdd2c1498f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:12 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:12.836+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56740215-53be-496a-bb36-0fdd2c1498f9' of type subvolume
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56740215-53be-496a-bb36-0fdd2c1498f9' of type subvolume
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9'' moved to trashcan
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:20:12 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb  1 10:20:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 99 KiB/s wr, 9 op/s
Feb  1 10:20:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Feb  1 10:20:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Feb  1 10:20:16 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 102 KiB/s wr, 9 op/s
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/7b4dae3c-af1f-4fca-9e91-24f56e0bd08e'.
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/.meta.tmp'
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/.meta.tmp' to config b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/.meta'
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "format": "json"}]: dispatch
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb  1 10:20:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:20:16 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1_ec81534c-37e1-436f-8b77-bcabec4a8b35", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1_ec81534c-37e1-436f-8b77-bcabec4a8b35, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1_ec81534c-37e1-436f-8b77-bcabec4a8b35, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:16 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:20:17
Feb  1 10:20:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:20:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:20:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Feb  1 10:20:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 865 B/s rd, 98 KiB/s wr, 9 op/s
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:20:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0", "format": "json"}]: dispatch
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "format": "json"}]: dispatch
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:20 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:20.226+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81f31f1a-09e0-4333-ae71-05dc6131f94c' of type subvolume
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81f31f1a-09e0-4333-ae71-05dc6131f94c' of type subvolume
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c'' moved to trashcan
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb  1 10:20:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 81 KiB/s wr, 7 op/s
Feb  1 10:20:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 6 op/s
Feb  1 10:20:23 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0_85344f13-853b-4a08-8ae5-5931230f8f33", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0_85344f13-853b-4a08-8ae5-5931230f8f33, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:23 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0_85344f13-853b-4a08-8ae5-5931230f8f33, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:24 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:24 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 6 op/s
Feb  1 10:20:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Feb  1 10:20:25 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Feb  1 10:20:25 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Feb  1 10:20:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 72 KiB/s wr, 7 op/s
Feb  1 10:20:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Feb  1 10:20:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Feb  1 10:20:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Feb  1 10:20:27 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db", "format": "json"}]: dispatch
Feb  1 10:20:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:27 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665971230985504 of space, bias 1.0, pg target 0.1997913692956512 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005303337260864361 of space, bias 4.0, pg target 0.6364004713037233 quantized to 16 (current 16)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:20:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 90 KiB/s wr, 9 op/s
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db_9883d1fb-bbfe-49b8-87f6-937369add4a2", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db_9883d1fb-bbfe-49b8-87f6-937369add4a2, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db_9883d1fb-bbfe-49b8-87f6-937369add4a2, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 29 KiB/s wr, 4 op/s
Feb  1 10:20:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Feb  1 10:20:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Feb  1 10:20:31 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Feb  1 10:20:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 91 KiB/s wr, 7 op/s
Feb  1 10:20:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 2 op/s
Feb  1 10:20:34 np0005604375 podman[249441]: 2026-02-01 15:20:34.994650818 +0000 UTC m=+0.066815885 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  1 10:20:35 np0005604375 podman[249442]: 2026-02-01 15:20:35.029272803 +0000 UTC m=+0.101597505 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb  1 10:20:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3", "format": "json"}]: dispatch
Feb  1 10:20:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:35 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 206 B/s rd, 54 KiB/s wr, 3 op/s
Feb  1 10:20:36 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:20:36.899 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:20:36 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:20:36.902 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:20:36 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:20:36.903 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3_3c99591e-4443-46aa-892b-59f2735dca00", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3_3c99591e-4443-46aa-892b-59f2735dca00, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3_3c99591e-4443-46aa-892b-59f2735dca00, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Feb  1 10:20:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Feb  1 10:20:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Feb  1 10:20:40 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Feb  1 10:20:40 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Feb  1 10:20:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Feb  1 10:20:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Feb  1 10:20:41 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Feb  1 10:20:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 60 KiB/s wr, 4 op/s
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1_62d967c2-993a-452f-a738-a621dc2deead", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1_62d967c2-993a-452f-a738-a621dc2deead, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1_62d967c2-993a-452f-a738-a621dc2deead, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 3 op/s
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "format": "json"}]: dispatch
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd' of type subvolume
Feb  1 10:20:44 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:44.825+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd' of type subvolume
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "force": true, "format": "json"}]: dispatch
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd'' moved to trashcan
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:20:44 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.331739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246331774, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1123, "num_deletes": 261, "total_data_size": 1389898, "memory_usage": 1418272, "flush_reason": "Manual Compaction"}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246339487, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1374620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23900, "largest_seqno": 25022, "table_properties": {"data_size": 1369076, "index_size": 2812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13005, "raw_average_key_size": 20, "raw_value_size": 1357351, "raw_average_value_size": 2097, "num_data_blocks": 125, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769959179, "oldest_key_time": 1769959179, "file_creation_time": 1769959246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7785 microseconds, and 3982 cpu microseconds.
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.339525) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1374620 bytes OK
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.339545) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.341578) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.341594) EVENT_LOG_v1 {"time_micros": 1769959246341588, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.341612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1384380, prev total WAL file size 1384380, number of live WAL files 2.
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.342126) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1342KB)], [53(8436KB)]
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246342168, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10013445, "oldest_snapshot_seqno": -1}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5551 keys, 9911389 bytes, temperature: kUnknown
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246391397, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9911389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9870828, "index_size": 25603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 138453, "raw_average_key_size": 24, "raw_value_size": 9767764, "raw_average_value_size": 1759, "num_data_blocks": 1064, "num_entries": 5551, "num_filter_entries": 5551, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.391660) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9911389 bytes
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.393667) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 201.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(14.5) write-amplify(7.2) OK, records in: 6093, records dropped: 542 output_compression: NoCompression
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.393686) EVENT_LOG_v1 {"time_micros": 1769959246393677, "job": 28, "event": "compaction_finished", "compaction_time_micros": 49297, "compaction_time_cpu_micros": 20136, "output_level": 6, "num_output_files": 1, "total_output_size": 9911389, "num_input_records": 6093, "num_output_records": 5551, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246393933, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246395008, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.342012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:20:46 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:20:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 5 op/s
Feb  1 10:20:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 5 op/s
Feb  1 10:20:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:20:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:20:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:20:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:20:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:20:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:20:49 np0005604375 nova_compute[238794]: 2026-02-01 15:20:49.669 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:50 np0005604375 nova_compute[238794]: 2026-02-01 15:20:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:50 np0005604375 nova_compute[238794]: 2026-02-01 15:20:50.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:50 np0005604375 nova_compute[238794]: 2026-02-01 15:20:50.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:20:50 np0005604375 nova_compute[238794]: 2026-02-01 15:20:50.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:20:50 np0005604375 nova_compute[238794]: 2026-02-01 15:20:50.336 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:20:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s wr, 3 op/s
Feb  1 10:20:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Feb  1 10:20:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Feb  1 10:20:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Feb  1 10:20:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:20:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226999884' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:20:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:20:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226999884' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:20:51 np0005604375 nova_compute[238794]: 2026-02-01 15:20:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:51 np0005604375 nova_compute[238794]: 2026-02-01 15:20:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:51 np0005604375 nova_compute[238794]: 2026-02-01 15:20:51.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:20:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Feb  1 10:20:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Feb  1 10:20:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Feb  1 10:20:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 62 KiB/s wr, 5 op/s
Feb  1 10:20:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 25 KiB/s wr, 2 op/s
Feb  1 10:20:55 np0005604375 nova_compute[238794]: 2026-02-01 15:20:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:55 np0005604375 nova_compute[238794]: 2026-02-01 15:20:55.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:56 np0005604375 nova_compute[238794]: 2026-02-01 15:20:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:20:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 33 KiB/s wr, 3 op/s
Feb  1 10:20:58 np0005604375 nova_compute[238794]: 2026-02-01 15:20:58.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 33 KiB/s wr, 3 op/s
Feb  1 10:20:59 np0005604375 nova_compute[238794]: 2026-02-01 15:20:59.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:20:59 np0005604375 nova_compute[238794]: 2026-02-01 15:20:59.353 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:20:59 np0005604375 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:20:59 np0005604375 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:20:59 np0005604375 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:20:59 np0005604375 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:20:59 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:20:59 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1885866586' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:20:59 np0005604375 nova_compute[238794]: 2026-02-01 15:20:59.880 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.052 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.053 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5044MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.053 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.054 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.131 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.131 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.149 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:21:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s wr, 1 op/s
Feb  1 10:21:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:21:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684780358' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.696 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.702 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.717 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.720 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:21:00 np0005604375 nova_compute[238794]: 2026-02-01 15:21:00.720 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:21:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Feb  1 10:21:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Feb  1 10:21:01 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Feb  1 10:21:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:21:01 np0005604375 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 8950 writes, 33K keys, 8950 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 8950 writes, 2269 syncs, 3.94 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3087 writes, 8468 keys, 3087 commit groups, 1.0 writes per commit group, ingest: 9.99 MB, 0.02 MB/s#012Interval WAL: 3087 writes, 1257 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  1 10:21:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s wr, 0 op/s
Feb  1 10:21:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s wr, 0 op/s
Feb  1 10:21:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:21:04 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 4598 syncs, 3.14 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7289 writes, 24K keys, 7289 commit groups, 1.0 writes per commit group, ingest: 34.81 MB, 0.06 MB/s#012Interval WAL: 7289 writes, 3168 syncs, 2.30 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  1 10:21:05 np0005604375 podman[249530]: 2026-02-01 15:21:05.981796225 +0000 UTC m=+0.061509326 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  1 10:21:06 np0005604375 podman[249531]: 2026-02-01 15:21:06.061394925 +0000 UTC m=+0.138846793 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true)
Feb  1 10:21:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb  1 10:21:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:21:07.817 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:21:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:21:07.818 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:21:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:21:07.818 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:21:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb  1 10:21:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:21:08 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8843 writes, 32K keys, 8843 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 8843 writes, 2113 syncs, 4.19 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3112 writes, 8422 keys, 3112 commit groups, 1.0 writes per commit group, ingest: 8.08 MB, 0.01 MB/s#012Interval WAL: 3112 writes, 1189 syncs, 2.62 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:21:09 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:21:09 np0005604375 podman[249785]: 2026-02-01 15:21:09.597542808 +0000 UTC m=+0.033255128 container create ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:21:09 np0005604375 systemd[1]: Started libpod-conmon-ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a.scope.
Feb  1 10:21:09 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:21:09 np0005604375 podman[249785]: 2026-02-01 15:21:09.671328846 +0000 UTC m=+0.107041176 container init ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  1 10:21:09 np0005604375 podman[249785]: 2026-02-01 15:21:09.580942245 +0000 UTC m=+0.016654565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:21:09 np0005604375 podman[249785]: 2026-02-01 15:21:09.680256865 +0000 UTC m=+0.115969195 container start ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 10:21:09 np0005604375 podman[249785]: 2026-02-01 15:21:09.683122965 +0000 UTC m=+0.118835305 container attach ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  1 10:21:09 np0005604375 hungry_hopper[249801]: 167 167
Feb  1 10:21:09 np0005604375 systemd[1]: libpod-ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a.scope: Deactivated successfully.
Feb  1 10:21:09 np0005604375 podman[249785]: 2026-02-01 15:21:09.686428597 +0000 UTC m=+0.122140957 container died ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 10:21:09 np0005604375 systemd[1]: var-lib-containers-storage-overlay-4f705469dce8e61c26e80b5b04f16e31a91fa210f9c83d7490f9e735b43054f5-merged.mount: Deactivated successfully.
Feb  1 10:21:09 np0005604375 podman[249785]: 2026-02-01 15:21:09.732722858 +0000 UTC m=+0.168435178 container remove ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:21:09 np0005604375 systemd[1]: libpod-conmon-ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a.scope: Deactivated successfully.
Feb  1 10:21:09 np0005604375 podman[249824]: 2026-02-01 15:21:09.869060311 +0000 UTC m=+0.045390367 container create a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb  1 10:21:09 np0005604375 systemd[1]: Started libpod-conmon-a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523.scope.
Feb  1 10:21:09 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:21:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:09 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:09 np0005604375 podman[249824]: 2026-02-01 15:21:09.848632691 +0000 UTC m=+0.024962757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:21:09 np0005604375 podman[249824]: 2026-02-01 15:21:09.967075114 +0000 UTC m=+0.143405170 container init a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  1 10:21:09 np0005604375 podman[249824]: 2026-02-01 15:21:09.979730857 +0000 UTC m=+0.156060883 container start a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:21:09 np0005604375 podman[249824]: 2026-02-01 15:21:09.98341619 +0000 UTC m=+0.159746216 container attach a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:21:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:21:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:10 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:21:10 np0005604375 kind_archimedes[249841]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:21:10 np0005604375 kind_archimedes[249841]: --> All data devices are unavailable
Feb  1 10:21:10 np0005604375 systemd[1]: libpod-a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523.scope: Deactivated successfully.
Feb  1 10:21:10 np0005604375 podman[249824]: 2026-02-01 15:21:10.455801445 +0000 UTC m=+0.632131521 container died a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb  1 10:21:10 np0005604375 systemd[1]: var-lib-containers-storage-overlay-76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c-merged.mount: Deactivated successfully.
Feb  1 10:21:10 np0005604375 podman[249824]: 2026-02-01 15:21:10.501242611 +0000 UTC m=+0.677572637 container remove a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  1 10:21:10 np0005604375 systemd[1]: libpod-conmon-a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523.scope: Deactivated successfully.
Feb  1 10:21:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb  1 10:21:10 np0005604375 podman[249937]: 2026-02-01 15:21:10.936013947 +0000 UTC m=+0.038546086 container create a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 10:21:10 np0005604375 systemd[1]: Started libpod-conmon-a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0.scope.
Feb  1 10:21:10 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:21:10 np0005604375 podman[249937]: 2026-02-01 15:21:10.999479157 +0000 UTC m=+0.102011286 container init a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:21:11 np0005604375 podman[249937]: 2026-02-01 15:21:11.007604364 +0000 UTC m=+0.110136533 container start a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:21:11 np0005604375 friendly_jones[249953]: 167 167
Feb  1 10:21:11 np0005604375 systemd[1]: libpod-a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0.scope: Deactivated successfully.
Feb  1 10:21:11 np0005604375 podman[249937]: 2026-02-01 15:21:11.011687898 +0000 UTC m=+0.114220187 container attach a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:21:11 np0005604375 podman[249937]: 2026-02-01 15:21:11.012762868 +0000 UTC m=+0.115295037 container died a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  1 10:21:11 np0005604375 podman[249937]: 2026-02-01 15:21:10.920155705 +0000 UTC m=+0.022687874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:21:11 np0005604375 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb  1 10:21:11 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0c6a484e1c81a561034788abb0c257cac017b11f36747f4a74e5804edd0ecbeb-merged.mount: Deactivated successfully.
Feb  1 10:21:11 np0005604375 podman[249937]: 2026-02-01 15:21:11.048502395 +0000 UTC m=+0.151034524 container remove a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:21:11 np0005604375 systemd[1]: libpod-conmon-a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0.scope: Deactivated successfully.
Feb  1 10:21:11 np0005604375 podman[249976]: 2026-02-01 15:21:11.192736867 +0000 UTC m=+0.042038573 container create 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 10:21:11 np0005604375 systemd[1]: Started libpod-conmon-38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd.scope.
Feb  1 10:21:11 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:21:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:11 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:11 np0005604375 podman[249976]: 2026-02-01 15:21:11.174332004 +0000 UTC m=+0.023633740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:21:11 np0005604375 podman[249976]: 2026-02-01 15:21:11.271938976 +0000 UTC m=+0.121240702 container init 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 10:21:11 np0005604375 podman[249976]: 2026-02-01 15:21:11.277658856 +0000 UTC m=+0.126960562 container start 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:21:11 np0005604375 podman[249976]: 2026-02-01 15:21:11.281343399 +0000 UTC m=+0.130645115 container attach 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  1 10:21:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]: {
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:    "0": [
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:        {
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "devices": [
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "/dev/loop3"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            ],
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_name": "ceph_lv0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_size": "21470642176",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "name": "ceph_lv0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "tags": {
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cluster_name": "ceph",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.crush_device_class": "",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.encrypted": "0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.objectstore": "bluestore",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osd_id": "0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.type": "block",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.vdo": "0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.with_tpm": "0"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            },
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "type": "block",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "vg_name": "ceph_vg0"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:        }
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:    ],
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:    "1": [
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:        {
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "devices": [
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "/dev/loop4"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            ],
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_name": "ceph_lv1",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_size": "21470642176",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "name": "ceph_lv1",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "tags": {
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cluster_name": "ceph",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.crush_device_class": "",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.encrypted": "0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.objectstore": "bluestore",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osd_id": "1",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.type": "block",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.vdo": "0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.with_tpm": "0"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            },
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "type": "block",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "vg_name": "ceph_vg1"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:        }
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:    ],
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:    "2": [
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:        {
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "devices": [
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "/dev/loop5"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            ],
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_name": "ceph_lv2",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_size": "21470642176",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "name": "ceph_lv2",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "tags": {
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.cluster_name": "ceph",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.crush_device_class": "",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.encrypted": "0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.objectstore": "bluestore",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osd_id": "2",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.type": "block",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.vdo": "0",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:                "ceph.with_tpm": "0"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            },
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "type": "block",
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:            "vg_name": "ceph_vg2"
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:        }
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]:    ]
Feb  1 10:21:11 np0005604375 peaceful_fermat[249992]: }
Feb  1 10:21:11 np0005604375 systemd[1]: libpod-38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd.scope: Deactivated successfully.
Feb  1 10:21:11 np0005604375 podman[249976]: 2026-02-01 15:21:11.574808993 +0000 UTC m=+0.424110729 container died 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  1 10:21:11 np0005604375 systemd[1]: var-lib-containers-storage-overlay-d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d-merged.mount: Deactivated successfully.
Feb  1 10:21:11 np0005604375 podman[249976]: 2026-02-01 15:21:11.629468008 +0000 UTC m=+0.478769744 container remove 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  1 10:21:11 np0005604375 systemd[1]: libpod-conmon-38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd.scope: Deactivated successfully.
Feb  1 10:21:12 np0005604375 podman[250075]: 2026-02-01 15:21:12.105621538 +0000 UTC m=+0.050792838 container create 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:21:12 np0005604375 systemd[1]: Started libpod-conmon-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope.
Feb  1 10:21:12 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:21:12 np0005604375 podman[250075]: 2026-02-01 15:21:12.079467398 +0000 UTC m=+0.024638728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:21:12 np0005604375 podman[250075]: 2026-02-01 15:21:12.17416206 +0000 UTC m=+0.119333390 container init 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:21:12 np0005604375 podman[250075]: 2026-02-01 15:21:12.180806565 +0000 UTC m=+0.125977815 container start 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:21:12 np0005604375 modest_greider[250092]: 167 167
Feb  1 10:21:12 np0005604375 systemd[1]: libpod-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope: Deactivated successfully.
Feb  1 10:21:12 np0005604375 conmon[250092]: conmon 663c6552b6544e8f2e78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope/container/memory.events
Feb  1 10:21:12 np0005604375 podman[250075]: 2026-02-01 15:21:12.186350329 +0000 UTC m=+0.131521679 container attach 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:21:12 np0005604375 podman[250075]: 2026-02-01 15:21:12.186763051 +0000 UTC m=+0.131934331 container died 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  1 10:21:12 np0005604375 systemd[1]: var-lib-containers-storage-overlay-14e8ced847417b856ff804617db75e046b9227eee190c037fbfa1b62d8c46dff-merged.mount: Deactivated successfully.
Feb  1 10:21:12 np0005604375 podman[250075]: 2026-02-01 15:21:12.227861687 +0000 UTC m=+0.173032937 container remove 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:21:12 np0005604375 systemd[1]: libpod-conmon-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope: Deactivated successfully.
Feb  1 10:21:12 np0005604375 podman[250115]: 2026-02-01 15:21:12.406496169 +0000 UTC m=+0.059283854 container create 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 10:21:12 np0005604375 systemd[1]: Started libpod-conmon-6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c.scope.
Feb  1 10:21:12 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:21:12 np0005604375 podman[250115]: 2026-02-01 15:21:12.382236583 +0000 UTC m=+0.035024358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:21:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:12 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:21:12 np0005604375 podman[250115]: 2026-02-01 15:21:12.507713342 +0000 UTC m=+0.160501067 container init 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 10:21:12 np0005604375 podman[250115]: 2026-02-01 15:21:12.51622024 +0000 UTC m=+0.169007925 container start 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:21:12 np0005604375 podman[250115]: 2026-02-01 15:21:12.519259994 +0000 UTC m=+0.172047709 container attach 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:21:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:13 np0005604375 lvm[250211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:21:13 np0005604375 lvm[250211]: VG ceph_vg0 finished
Feb  1 10:21:13 np0005604375 lvm[250212]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:21:13 np0005604375 lvm[250212]: VG ceph_vg1 finished
Feb  1 10:21:13 np0005604375 lvm[250214]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:21:13 np0005604375 lvm[250214]: VG ceph_vg2 finished
Feb  1 10:21:13 np0005604375 trusting_galois[250132]: {}
Feb  1 10:21:13 np0005604375 systemd[1]: libpod-6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c.scope: Deactivated successfully.
Feb  1 10:21:13 np0005604375 podman[250217]: 2026-02-01 15:21:13.303399774 +0000 UTC m=+0.033564857 container died 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  1 10:21:13 np0005604375 systemd[1]: var-lib-containers-storage-overlay-80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358-merged.mount: Deactivated successfully.
Feb  1 10:21:13 np0005604375 podman[250217]: 2026-02-01 15:21:13.336758314 +0000 UTC m=+0.066923387 container remove 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  1 10:21:13 np0005604375 systemd[1]: libpod-conmon-6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c.scope: Deactivated successfully.
Feb  1 10:21:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:21:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:21:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:14 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:14 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:21:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:21:17
Feb  1 10:21:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:21:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:21:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'images', 'backups', 'vms']
Feb  1 10:21:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:21:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:21:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:21 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:21:21.598 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:21:21 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:21:21.600 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:21:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:24 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:21:24.603 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:21:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659717898882094 of space, bias 1.0, pg target 0.19979153696646282 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005527671647039236 of space, bias 4.0, pg target 0.6633205976447083 quantized to 16 (current 16)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:21:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:37 np0005604375 podman[250257]: 2026-02-01 15:21:37.005996537 +0000 UTC m=+0.079297672 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:21:37 np0005604375 podman[250258]: 2026-02-01 15:21:37.045984302 +0000 UTC m=+0.117732344 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb  1 10:21:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/6f3fb38f-4fb5-428d-af1f-466faa7d1587'.
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp'
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp' to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta'
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "format": "json"}]: dispatch
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:39 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:39 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:21:39 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:21:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Feb  1 10:21:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Feb  1 10:21:43 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3", "format": "json"}]: dispatch
Feb  1 10:21:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:43 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:44 np0005604375 nova_compute[238794]: 2026-02-01 15:21:44.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Feb  1 10:21:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s wr, 1 op/s
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3_a8b1ef42-1e25-4f11-8838-77f94c29ebe4", "force": true, "format": "json"}]: dispatch
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3_a8b1ef42-1e25-4f11-8838-77f94c29ebe4, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp'
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp' to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta'
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3_a8b1ef42-1e25-4f11-8838-77f94c29ebe4, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3", "force": true, "format": "json"}]: dispatch
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp'
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp' to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta'
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s wr, 1 op/s
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8299156f70>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825b5e6d90>)]
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:21:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:21:50 np0005604375 nova_compute[238794]: 2026-02-01 15:21:50.335 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:50 np0005604375 nova_compute[238794]: 2026-02-01 15:21:50.335 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:21:50 np0005604375 nova_compute[238794]: 2026-02-01 15:21:50.336 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:21:50 np0005604375 nova_compute[238794]: 2026-02-01 15:21:50.350 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:21:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 33 KiB/s wr, 2 op/s
Feb  1 10:21:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Feb  1 10:21:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Feb  1 10:21:50 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Feb  1 10:21:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:21:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227912831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:21:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:21:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227912831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:21:51 np0005604375 nova_compute[238794]: 2026-02-01 15:21:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:51 np0005604375 nova_compute[238794]: 2026-02-01 15:21:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:51 np0005604375 nova_compute[238794]: 2026-02-01 15:21:51.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:51 np0005604375 nova_compute[238794]: 2026-02-01 15:21:51.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:21:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "091d85e3-6421-421c-a022-3095345db8aa", "format": "json"}]: dispatch
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:091d85e3-6421-421c-a022-3095345db8aa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:091d85e3-6421-421c-a022-3095345db8aa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.657+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '091d85e3-6421-421c-a022-3095345db8aa' of type subvolume
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '091d85e3-6421-421c-a022-3095345db8aa' of type subvolume
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "force": true, "format": "json"}]: dispatch
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa'' moved to trashcan
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb  1 10:21:51 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.viosrg(active, since 31m)
Feb  1 10:21:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 2 op/s
Feb  1 10:21:53 np0005604375 nova_compute[238794]: 2026-02-01 15:21:53.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:53 np0005604375 nova_compute[238794]: 2026-02-01 15:21:53.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  1 10:21:53 np0005604375 nova_compute[238794]: 2026-02-01 15:21:53.334 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  1 10:21:53 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.viosrg(active, since 31m)
Feb  1 10:21:54 np0005604375 nova_compute[238794]: 2026-02-01 15:21:54.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:54 np0005604375 nova_compute[238794]: 2026-02-01 15:21:54.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  1 10:21:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 2 op/s
Feb  1 10:21:55 np0005604375 nova_compute[238794]: 2026-02-01 15:21:55.339 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:55 np0005604375 nova_compute[238794]: 2026-02-01 15:21:55.339 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:56 np0005604375 nova_compute[238794]: 2026-02-01 15:21:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:21:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:21:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 51 KiB/s wr, 4 op/s
Feb  1 10:21:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 51 KiB/s wr, 4 op/s
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.366 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.367 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.368 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.368 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.369 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:22:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 51 KiB/s wr, 33 op/s
Feb  1 10:22:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:22:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068617142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:22:00 np0005604375 nova_compute[238794]: 2026-02-01 15:22:00.900 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.064 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.065 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.066 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.066 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.325 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.325 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:22:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Feb  1 10:22:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Feb  1 10:22:01 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.388 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing inventories for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.523 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating ProviderTree inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.524 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.536 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing aggregate associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.556 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing trait associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, traits: COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  1 10:22:01 np0005604375 nova_compute[238794]: 2026-02-01 15:22:01.572 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:22:02 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:22:02 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937054747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:22:02 np0005604375 nova_compute[238794]: 2026-02-01 15:22:02.120 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:22:02 np0005604375 nova_compute[238794]: 2026-02-01 15:22:02.127 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:22:02 np0005604375 nova_compute[238794]: 2026-02-01 15:22:02.145 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:22:02 np0005604375 nova_compute[238794]: 2026-02-01 15:22:02.148 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:22:02 np0005604375 nova_compute[238794]: 2026-02-01 15:22:02.148 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:22:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 46 KiB/s wr, 93 op/s
Feb  1 10:22:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 46 KiB/s wr, 93 op/s
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/efb0581a-17af-495b-a4b5-cac17d7af446'.
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp'
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp' to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta'
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "format": "json"}]: dispatch
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:05 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:05 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  1 10:22:05 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  1 10:22:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 24 KiB/s wr, 91 op/s
Feb  1 10:22:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:22:07.818 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:22:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:22:07.819 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:22:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:22:07.819 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:22:07 np0005604375 podman[250366]: 2026-02-01 15:22:07.976374446 +0000 UTC m=+0.062331248 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb  1 10:22:07 np0005604375 podman[250367]: 2026-02-01 15:22:07.98615154 +0000 UTC m=+0.075061064 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb  1 10:22:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 24 KiB/s wr, 91 op/s
Feb  1 10:22:08 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51", "format": "json"}]: dispatch
Feb  1 10:22:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:08 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 24 KiB/s wr, 62 op/s
Feb  1 10:22:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51_b24e846b-f29d-418f-a067-565f2a42532d", "force": true, "format": "json"}]: dispatch
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51_b24e846b-f29d-418f-a067-565f2a42532d, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp'
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp' to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta'
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51_b24e846b-f29d-418f-a067-565f2a42532d, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51", "force": true, "format": "json"}]: dispatch
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp'
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp' to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta'
Feb  1 10:22:13 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:22:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:13 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:22:13 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:13 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Feb  1 10:22:14 np0005604375 podman[250624]: 2026-02-01 15:22:14.84048969 +0000 UTC m=+0.062419071 container create dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  1 10:22:14 np0005604375 systemd[1]: Started libpod-conmon-dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca.scope.
Feb  1 10:22:14 np0005604375 podman[250624]: 2026-02-01 15:22:14.81017818 +0000 UTC m=+0.032107661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:14 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:14 np0005604375 podman[250624]: 2026-02-01 15:22:14.923766232 +0000 UTC m=+0.145695643 container init dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:22:14 np0005604375 podman[250624]: 2026-02-01 15:22:14.930201623 +0000 UTC m=+0.152130994 container start dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  1 10:22:14 np0005604375 podman[250624]: 2026-02-01 15:22:14.933339921 +0000 UTC m=+0.155269352 container attach dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  1 10:22:14 np0005604375 strange_galileo[250640]: 167 167
Feb  1 10:22:14 np0005604375 systemd[1]: libpod-dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca.scope: Deactivated successfully.
Feb  1 10:22:14 np0005604375 podman[250624]: 2026-02-01 15:22:14.935715117 +0000 UTC m=+0.157644488 container died dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Feb  1 10:22:14 np0005604375 systemd[1]: var-lib-containers-storage-overlay-0e942142c467b6a05f0aa769147f68a15571c6eeb92742dd9b4ecac61543976a-merged.mount: Deactivated successfully.
Feb  1 10:22:14 np0005604375 podman[250624]: 2026-02-01 15:22:14.971698166 +0000 UTC m=+0.193627537 container remove dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  1 10:22:14 np0005604375 systemd[1]: libpod-conmon-dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca.scope: Deactivated successfully.
Feb  1 10:22:15 np0005604375 podman[250663]: 2026-02-01 15:22:15.097587664 +0000 UTC m=+0.036747701 container create 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  1 10:22:15 np0005604375 systemd[1]: Started libpod-conmon-2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4.scope.
Feb  1 10:22:15 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:15 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:15 np0005604375 podman[250663]: 2026-02-01 15:22:15.082800959 +0000 UTC m=+0.021960996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:15 np0005604375 podman[250663]: 2026-02-01 15:22:15.181766033 +0000 UTC m=+0.120926060 container init 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:22:15 np0005604375 podman[250663]: 2026-02-01 15:22:15.188580464 +0000 UTC m=+0.127740491 container start 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 10:22:15 np0005604375 podman[250663]: 2026-02-01 15:22:15.191743262 +0000 UTC m=+0.130903279 container attach 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]: [
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:    {
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "available": false,
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "being_replaced": false,
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "ceph_device_lvm": false,
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "lsm_data": {},
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "lvs": [],
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "path": "/dev/sr0",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "rejected_reasons": [
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "Has a FileSystem",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "Insufficient space (<5GB)"
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        ],
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        "sys_api": {
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "actuators": null,
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "device_nodes": [
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:                "sr0"
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            ],
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "devname": "sr0",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "human_readable_size": "482.00 KB",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "id_bus": "ata",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "model": "QEMU DVD-ROM",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "nr_requests": "2",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "parent": "/dev/sr0",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "partitions": {},
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "path": "/dev/sr0",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "removable": "1",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "rev": "2.5+",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "ro": "0",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "rotational": "1",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "sas_address": "",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "sas_device_handle": "",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "scheduler_mode": "mq-deadline",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "sectors": 0,
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "sectorsize": "2048",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "size": 493568.0,
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "support_discard": "2048",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "type": "disk",
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:            "vendor": "QEMU"
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:        }
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]:    }
Feb  1 10:22:15 np0005604375 peaceful_lumiere[250680]: ]
Feb  1 10:22:15 np0005604375 systemd[1]: libpod-2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4.scope: Deactivated successfully.
Feb  1 10:22:15 np0005604375 podman[250663]: 2026-02-01 15:22:15.700724056 +0000 UTC m=+0.639884103 container died 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:22:15 np0005604375 systemd[1]: var-lib-containers-storage-overlay-013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47-merged.mount: Deactivated successfully.
Feb  1 10:22:15 np0005604375 podman[250663]: 2026-02-01 15:22:15.746550131 +0000 UTC m=+0.685710178 container remove 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:22:15 np0005604375 systemd[1]: libpod-conmon-2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4.scope: Deactivated successfully.
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:15 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:22:16 np0005604375 podman[251552]: 2026-02-01 15:22:16.241819701 +0000 UTC m=+0.055537068 container create 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:22:16 np0005604375 systemd[1]: Started libpod-conmon-217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158.scope.
Feb  1 10:22:16 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:16 np0005604375 podman[251552]: 2026-02-01 15:22:16.305192917 +0000 UTC m=+0.118910264 container init 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 10:22:16 np0005604375 podman[251552]: 2026-02-01 15:22:16.312850131 +0000 UTC m=+0.126567498 container start 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:22:16 np0005604375 podman[251552]: 2026-02-01 15:22:16.221481981 +0000 UTC m=+0.035199398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:16 np0005604375 goofy_germain[251567]: 167 167
Feb  1 10:22:16 np0005604375 systemd[1]: libpod-217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158.scope: Deactivated successfully.
Feb  1 10:22:16 np0005604375 podman[251552]: 2026-02-01 15:22:16.31675148 +0000 UTC m=+0.130468857 container attach 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb  1 10:22:16 np0005604375 podman[251552]: 2026-02-01 15:22:16.317415569 +0000 UTC m=+0.131132936 container died 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 10:22:16 np0005604375 systemd[1]: var-lib-containers-storage-overlay-bf7d3188bcba2e44c2d8c9312bacac6826518d4e85ddc203e753e6469cda3a3c-merged.mount: Deactivated successfully.
Feb  1 10:22:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:16 np0005604375 podman[251552]: 2026-02-01 15:22:16.356963587 +0000 UTC m=+0.170680934 container remove 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  1 10:22:16 np0005604375 systemd[1]: libpod-conmon-217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158.scope: Deactivated successfully.
Feb  1 10:22:16 np0005604375 podman[251592]: 2026-02-01 15:22:16.525962154 +0000 UTC m=+0.052453721 container create 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  1 10:22:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Feb  1 10:22:16 np0005604375 systemd[1]: Started libpod-conmon-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope.
Feb  1 10:22:16 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:16 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:16 np0005604375 podman[251592]: 2026-02-01 15:22:16.595573454 +0000 UTC m=+0.122065071 container init 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Feb  1 10:22:16 np0005604375 podman[251592]: 2026-02-01 15:22:16.506079206 +0000 UTC m=+0.032570863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:16 np0005604375 podman[251592]: 2026-02-01 15:22:16.607513829 +0000 UTC m=+0.134005426 container start 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  1 10:22:16 np0005604375 podman[251592]: 2026-02-01 15:22:16.611145121 +0000 UTC m=+0.137636718 container attach 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:22:16 np0005604375 reverent_moser[251609]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:22:16 np0005604375 reverent_moser[251609]: --> All data devices are unavailable
Feb  1 10:22:17 np0005604375 systemd[1]: libpod-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope: Deactivated successfully.
Feb  1 10:22:17 np0005604375 conmon[251609]: conmon 8260a21daaf55f7095a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope/container/memory.events
Feb  1 10:22:17 np0005604375 podman[251592]: 2026-02-01 15:22:17.010942715 +0000 UTC m=+0.537434302 container died 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:22:17 np0005604375 systemd[1]: var-lib-containers-storage-overlay-23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33-merged.mount: Deactivated successfully.
Feb  1 10:22:17 np0005604375 podman[251592]: 2026-02-01 15:22:17.05893984 +0000 UTC m=+0.585431427 container remove 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:22:17 np0005604375 systemd[1]: libpod-conmon-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope: Deactivated successfully.
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "format": "json"}]: dispatch
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e03e65cf-03e2-407f-9515-a854a7393b45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e03e65cf-03e2-407f-9515-a854a7393b45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb  1 10:22:17 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:22:17.184+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03e65cf-03e2-407f-9515-a854a7393b45' of type subvolume
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03e65cf-03e2-407f-9515-a854a7393b45' of type subvolume
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "force": true, "format": "json"}]: dispatch
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45'' moved to trashcan
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb  1 10:22:17 np0005604375 podman[251703]: 2026-02-01 15:22:17.521960806 +0000 UTC m=+0.054203400 container create 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 10:22:17 np0005604375 systemd[1]: Started libpod-conmon-5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68.scope.
Feb  1 10:22:17 np0005604375 podman[251703]: 2026-02-01 15:22:17.494738093 +0000 UTC m=+0.026980757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:17 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:17 np0005604375 podman[251703]: 2026-02-01 15:22:17.604014096 +0000 UTC m=+0.136256700 container init 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  1 10:22:17 np0005604375 podman[251703]: 2026-02-01 15:22:17.612870114 +0000 UTC m=+0.145112688 container start 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:22:17 np0005604375 optimistic_vaughan[251719]: 167 167
Feb  1 10:22:17 np0005604375 systemd[1]: libpod-5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68.scope: Deactivated successfully.
Feb  1 10:22:17 np0005604375 podman[251703]: 2026-02-01 15:22:17.617810372 +0000 UTC m=+0.150052946 container attach 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  1 10:22:17 np0005604375 podman[251703]: 2026-02-01 15:22:17.618559533 +0000 UTC m=+0.150802127 container died 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  1 10:22:17 np0005604375 systemd[1]: var-lib-containers-storage-overlay-f1e86ef6352f8d75c5a2bae55400e15a2c4a9ba87b8d28d35c7650dc9f7253aa-merged.mount: Deactivated successfully.
Feb  1 10:22:17 np0005604375 podman[251703]: 2026-02-01 15:22:17.664076709 +0000 UTC m=+0.196319303 container remove 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  1 10:22:17 np0005604375 systemd[1]: libpod-conmon-5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68.scope: Deactivated successfully.
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:22:17
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'default.rgw.log']
Feb  1 10:22:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:22:17 np0005604375 podman[251743]: 2026-02-01 15:22:17.854383282 +0000 UTC m=+0.059235721 container create d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  1 10:22:17 np0005604375 systemd[1]: Started libpod-conmon-d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833.scope.
Feb  1 10:22:17 np0005604375 podman[251743]: 2026-02-01 15:22:17.833081635 +0000 UTC m=+0.037934074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:17 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:17 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:17 np0005604375 podman[251743]: 2026-02-01 15:22:17.946109503 +0000 UTC m=+0.150961942 container init d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  1 10:22:17 np0005604375 podman[251743]: 2026-02-01 15:22:17.951999688 +0000 UTC m=+0.156852127 container start d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  1 10:22:17 np0005604375 podman[251743]: 2026-02-01 15:22:17.955127936 +0000 UTC m=+0.159980355 container attach d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]: {
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:    "0": [
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:        {
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "devices": [
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "/dev/loop3"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            ],
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_name": "ceph_lv0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_size": "21470642176",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "name": "ceph_lv0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "tags": {
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cluster_name": "ceph",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.crush_device_class": "",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.encrypted": "0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.objectstore": "bluestore",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osd_id": "0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.type": "block",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.vdo": "0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.with_tpm": "0"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            },
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "type": "block",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "vg_name": "ceph_vg0"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:        }
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:    ],
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:    "1": [
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:        {
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "devices": [
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "/dev/loop4"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            ],
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_name": "ceph_lv1",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_size": "21470642176",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "name": "ceph_lv1",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "tags": {
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cluster_name": "ceph",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.crush_device_class": "",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.encrypted": "0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.objectstore": "bluestore",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osd_id": "1",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.type": "block",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.vdo": "0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.with_tpm": "0"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            },
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "type": "block",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "vg_name": "ceph_vg1"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:        }
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:    ],
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:    "2": [
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:        {
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "devices": [
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "/dev/loop5"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            ],
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_name": "ceph_lv2",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_size": "21470642176",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "name": "ceph_lv2",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "tags": {
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.cluster_name": "ceph",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.crush_device_class": "",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.encrypted": "0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.objectstore": "bluestore",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osd_id": "2",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.type": "block",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.vdo": "0",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:                "ceph.with_tpm": "0"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            },
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "type": "block",
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:            "vg_name": "ceph_vg2"
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:        }
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]:    ]
Feb  1 10:22:18 np0005604375 reverent_feistel[251759]: }
Feb  1 10:22:18 np0005604375 systemd[1]: libpod-d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833.scope: Deactivated successfully.
Feb  1 10:22:18 np0005604375 podman[251743]: 2026-02-01 15:22:18.235634307 +0000 UTC m=+0.440486726 container died d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  1 10:22:18 np0005604375 systemd[1]: var-lib-containers-storage-overlay-52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba-merged.mount: Deactivated successfully.
Feb  1 10:22:18 np0005604375 podman[251743]: 2026-02-01 15:22:18.279117745 +0000 UTC m=+0.483970164 container remove d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:22:18 np0005604375 systemd[1]: libpod-conmon-d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833.scope: Deactivated successfully.
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 2 op/s
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:22:18 np0005604375 podman[251843]: 2026-02-01 15:22:18.698256011 +0000 UTC m=+0.039251771 container create 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:22:18 np0005604375 systemd[1]: Started libpod-conmon-5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0.scope.
Feb  1 10:22:18 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:18 np0005604375 podman[251843]: 2026-02-01 15:22:18.769575929 +0000 UTC m=+0.110571659 container init 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:22:18 np0005604375 podman[251843]: 2026-02-01 15:22:18.676907022 +0000 UTC m=+0.017902842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:18 np0005604375 podman[251843]: 2026-02-01 15:22:18.777592554 +0000 UTC m=+0.118588324 container start 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:22:18 np0005604375 youthful_gates[251859]: 167 167
Feb  1 10:22:18 np0005604375 systemd[1]: libpod-5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0.scope: Deactivated successfully.
Feb  1 10:22:18 np0005604375 podman[251843]: 2026-02-01 15:22:18.78172712 +0000 UTC m=+0.122722890 container attach 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:22:18 np0005604375 podman[251843]: 2026-02-01 15:22:18.782287236 +0000 UTC m=+0.123283066 container died 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:22:18 np0005604375 systemd[1]: var-lib-containers-storage-overlay-3ef28785b4f0cf66e584d52427f32756b39661b55c21688c4e015df1172a3e42-merged.mount: Deactivated successfully.
Feb  1 10:22:18 np0005604375 podman[251843]: 2026-02-01 15:22:18.814223981 +0000 UTC m=+0.155219711 container remove 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  1 10:22:18 np0005604375 systemd[1]: libpod-conmon-5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0.scope: Deactivated successfully.
Feb  1 10:22:18 np0005604375 podman[251885]: 2026-02-01 15:22:18.954985965 +0000 UTC m=+0.033650694 container create 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:22:18 np0005604375 systemd[1]: Started libpod-conmon-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope.
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:22:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:22:19 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:22:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:19 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:22:19 np0005604375 podman[251885]: 2026-02-01 15:22:19.03007407 +0000 UTC m=+0.108738799 container init 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  1 10:22:19 np0005604375 podman[251885]: 2026-02-01 15:22:18.939431929 +0000 UTC m=+0.018096708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:22:19 np0005604375 podman[251885]: 2026-02-01 15:22:19.037009774 +0000 UTC m=+0.115674503 container start 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:22:19 np0005604375 podman[251885]: 2026-02-01 15:22:19.04008862 +0000 UTC m=+0.118753369 container attach 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:22:19 np0005604375 lvm[251977]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:22:19 np0005604375 lvm[251977]: VG ceph_vg0 finished
Feb  1 10:22:19 np0005604375 lvm[251980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:22:19 np0005604375 lvm[251980]: VG ceph_vg1 finished
Feb  1 10:22:19 np0005604375 lvm[251982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:22:19 np0005604375 lvm[251982]: VG ceph_vg2 finished
Feb  1 10:22:19 np0005604375 hardcore_hugle[251901]: {}
Feb  1 10:22:19 np0005604375 systemd[1]: libpod-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope: Deactivated successfully.
Feb  1 10:22:19 np0005604375 podman[251885]: 2026-02-01 15:22:19.779741089 +0000 UTC m=+0.858405818 container died 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  1 10:22:19 np0005604375 systemd[1]: libpod-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope: Consumed 1.104s CPU time.
Feb  1 10:22:19 np0005604375 systemd[1]: var-lib-containers-storage-overlay-1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a-merged.mount: Deactivated successfully.
Feb  1 10:22:19 np0005604375 podman[251885]: 2026-02-01 15:22:19.820561273 +0000 UTC m=+0.899226002 container remove 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 10:22:19 np0005604375 systemd[1]: libpod-conmon-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope: Deactivated successfully.
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:19 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:22:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 4 op/s
Feb  1 10:22:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:22 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:22:22.028 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  1 10:22:22 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:22:22.030 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  1 10:22:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Feb  1 10:22:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Feb  1 10:22:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Feb  1 10:22:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Feb  1 10:22:26 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Feb  1 10:22:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 3 op/s
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659720693395622 of space, bias 1.0, pg target 0.19979162080186866 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005843620122686109 of space, bias 4.0, pg target 0.701234414722333 quantized to 16 (current 16)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:22:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 476 B/s rd, 52 KiB/s wr, 3 op/s
Feb  1 10:22:29 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:22:29.032 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  1 10:22:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 1 op/s
Feb  1 10:22:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 0 op/s
Feb  1 10:22:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 0 op/s
Feb  1 10:22:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:39 np0005604375 podman[252023]: 2026-02-01 15:22:39.014080462 +0000 UTC m=+0.085645281 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb  1 10:22:39 np0005604375 podman[252024]: 2026-02-01 15:22:39.042285192 +0000 UTC m=+0.108706207 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  1 10:22:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.671093) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362671128, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1364, "num_deletes": 256, "total_data_size": 2299292, "memory_usage": 2344096, "flush_reason": "Manual Compaction"}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362682557, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2233789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25023, "largest_seqno": 26386, "table_properties": {"data_size": 2227249, "index_size": 3675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14105, "raw_average_key_size": 20, "raw_value_size": 2213986, "raw_average_value_size": 3190, "num_data_blocks": 167, "num_entries": 694, "num_filter_entries": 694, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769959247, "oldest_key_time": 1769959247, "file_creation_time": 1769959362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 11518 microseconds, and 6217 cpu microseconds.
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.682610) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2233789 bytes OK
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.682634) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.684424) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.684445) EVENT_LOG_v1 {"time_micros": 1769959362684437, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.684468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2293100, prev total WAL file size 2293100, number of live WAL files 2.
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.685113) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2181KB)], [56(9679KB)]
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362685192, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12145178, "oldest_snapshot_seqno": -1}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5717 keys, 10526799 bytes, temperature: kUnknown
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362765130, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10526799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10484920, "index_size": 26473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 142619, "raw_average_key_size": 24, "raw_value_size": 10378801, "raw_average_value_size": 1815, "num_data_blocks": 1099, "num_entries": 5717, "num_filter_entries": 5717, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.765446) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10526799 bytes
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.766584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.2 rd, 131.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.5 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 6245, records dropped: 528 output_compression: NoCompression
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.766608) EVENT_LOG_v1 {"time_micros": 1769959362766596, "job": 30, "event": "compaction_finished", "compaction_time_micros": 79817, "compaction_time_cpu_micros": 33010, "output_level": 6, "num_output_files": 1, "total_output_size": 10526799, "num_input_records": 6245, "num_output_records": 5717, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362766944, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362768183, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.685025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:22:42 np0005604375 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  1 10:22:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:22:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:22:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:22:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:22:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:22:48 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:22:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:22:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2022608717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:22:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:22:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2022608717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:22:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:52 np0005604375 nova_compute[238794]: 2026-02-01 15:22:52.145 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:52 np0005604375 nova_compute[238794]: 2026-02-01 15:22:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:52 np0005604375 nova_compute[238794]: 2026-02-01 15:22:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:52 np0005604375 nova_compute[238794]: 2026-02-01 15:22:52.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:22:52 np0005604375 nova_compute[238794]: 2026-02-01 15:22:52.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:22:52 np0005604375 nova_compute[238794]: 2026-02-01 15:22:52.340 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:22:52 np0005604375 nova_compute[238794]: 2026-02-01 15:22:52.340 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:53 np0005604375 nova_compute[238794]: 2026-02-01 15:22:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:53 np0005604375 nova_compute[238794]: 2026-02-01 15:22:53.321 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:22:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:56 np0005604375 nova_compute[238794]: 2026-02-01 15:22:56.322 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:56 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:22:56 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:22:57 np0005604375 nova_compute[238794]: 2026-02-01 15:22:57.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:58 np0005604375 nova_compute[238794]: 2026-02-01 15:22:58.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:22:58 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:00 np0005604375 nova_compute[238794]: 2026-02-01 15:23:00.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:23:00 np0005604375 nova_compute[238794]: 2026-02-01 15:23:00.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:23:00 np0005604375 nova_compute[238794]: 2026-02-01 15:23:00.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:23:00 np0005604375 nova_compute[238794]: 2026-02-01 15:23:00.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:23:00 np0005604375 nova_compute[238794]: 2026-02-01 15:23:00.348 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  1 10:23:00 np0005604375 nova_compute[238794]: 2026-02-01 15:23:00.348 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:23:00 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:00 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:23:00 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209366518' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:23:00 np0005604375 nova_compute[238794]: 2026-02-01 15:23:00.846 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.026 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.028 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5027MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.028 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.029 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.118 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.118 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.164 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  1 10:23:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:01 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  1 10:23:01 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450958973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.692 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.698 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.716 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.719 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  1 10:23:01 np0005604375 nova_compute[238794]: 2026-02-01 15:23:01.720 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:23:02 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:03 np0005604375 nova_compute[238794]: 2026-02-01 15:23:03.722 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:23:04 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:06 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:06 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:23:07.820 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  1 10:23:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:23:07.821 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  1 10:23:07 np0005604375 ovn_metadata_agent[154890]: 2026-02-01 15:23:07.821 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  1 10:23:08 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:10 np0005604375 podman[252113]: 2026-02-01 15:23:10.004534788 +0000 UTC m=+0.075885028 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  1 10:23:10 np0005604375 podman[252114]: 2026-02-01 15:23:10.090389294 +0000 UTC m=+0.160280763 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  1 10:23:10 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:11 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:12 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:14 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:16 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:16 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:23:17
Feb  1 10:23:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  1 10:23:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb  1 10:23:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'images']
Feb  1 10:23:17 np0005604375 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f82990cd130>)]
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825bee78b0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825be85340>)]
Feb  1 10:23:18 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:23:19 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:23:20 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  1 10:23:20 np0005604375 podman[252302]: 2026-02-01 15:23:20.843582886 +0000 UTC m=+0.058220802 container create eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb  1 10:23:20 np0005604375 systemd[1]: Started libpod-conmon-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope.
Feb  1 10:23:20 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:23:20 np0005604375 podman[252302]: 2026-02-01 15:23:20.807781773 +0000 UTC m=+0.022419749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:23:20 np0005604375 podman[252302]: 2026-02-01 15:23:20.911015016 +0000 UTC m=+0.125652902 container init eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:23:20 np0005604375 podman[252302]: 2026-02-01 15:23:20.919108153 +0000 UTC m=+0.133746039 container start eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:23:20 np0005604375 podman[252302]: 2026-02-01 15:23:20.922056595 +0000 UTC m=+0.136694511 container attach eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:23:20 np0005604375 serene_chatelet[252319]: 167 167
Feb  1 10:23:20 np0005604375 systemd[1]: libpod-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope: Deactivated successfully.
Feb  1 10:23:20 np0005604375 conmon[252319]: conmon eb94583fb4f036014659 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope/container/memory.events
Feb  1 10:23:20 np0005604375 podman[252302]: 2026-02-01 15:23:20.923984969 +0000 UTC m=+0.138622885 container died eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb  1 10:23:20 np0005604375 systemd[1]: var-lib-containers-storage-overlay-e153eb2dca1782aaf039009d7509ad4ce09ff96a616aac467cdb1af52f2173fe-merged.mount: Deactivated successfully.
Feb  1 10:23:20 np0005604375 podman[252302]: 2026-02-01 15:23:20.960376249 +0000 UTC m=+0.175014135 container remove eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb  1 10:23:20 np0005604375 systemd[1]: libpod-conmon-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope: Deactivated successfully.
Feb  1 10:23:21 np0005604375 podman[252343]: 2026-02-01 15:23:21.083758627 +0000 UTC m=+0.030323021 container create 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  1 10:23:21 np0005604375 systemd[1]: Started libpod-conmon-91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead.scope.
Feb  1 10:23:21 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:23:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:21 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:21 np0005604375 podman[252343]: 2026-02-01 15:23:21.070052133 +0000 UTC m=+0.016616547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:23:21 np0005604375 podman[252343]: 2026-02-01 15:23:21.179415178 +0000 UTC m=+0.125979652 container init 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  1 10:23:21 np0005604375 podman[252343]: 2026-02-01 15:23:21.188030729 +0000 UTC m=+0.134595163 container start 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:23:21 np0005604375 podman[252343]: 2026-02-01 15:23:21.19483021 +0000 UTC m=+0.141394634 container attach 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:23:21 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:21 np0005604375 serene_lumiere[252360]: --> passed data devices: 0 physical, 3 LVM
Feb  1 10:23:21 np0005604375 serene_lumiere[252360]: --> All data devices are unavailable
Feb  1 10:23:21 np0005604375 systemd[1]: libpod-91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead.scope: Deactivated successfully.
Feb  1 10:23:21 np0005604375 podman[252380]: 2026-02-01 15:23:21.774660099 +0000 UTC m=+0.025375342 container died 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  1 10:23:21 np0005604375 systemd[1]: var-lib-containers-storage-overlay-9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21-merged.mount: Deactivated successfully.
Feb  1 10:23:21 np0005604375 podman[252380]: 2026-02-01 15:23:21.806711508 +0000 UTC m=+0.057426721 container remove 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:23:21 np0005604375 systemd[1]: libpod-conmon-91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead.scope: Deactivated successfully.
Feb  1 10:23:21 np0005604375 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.viosrg(active, since 33m)
Feb  1 10:23:22 np0005604375 podman[252456]: 2026-02-01 15:23:22.213787856 +0000 UTC m=+0.031425222 container create 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 10:23:22 np0005604375 systemd[1]: Started libpod-conmon-210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b.scope.
Feb  1 10:23:22 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:23:22 np0005604375 podman[252456]: 2026-02-01 15:23:22.280542237 +0000 UTC m=+0.098179643 container init 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:23:22 np0005604375 podman[252456]: 2026-02-01 15:23:22.287136151 +0000 UTC m=+0.104773507 container start 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  1 10:23:22 np0005604375 podman[252456]: 2026-02-01 15:23:22.290064734 +0000 UTC m=+0.107702120 container attach 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  1 10:23:22 np0005604375 systemd[1]: libpod-210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b.scope: Deactivated successfully.
Feb  1 10:23:22 np0005604375 laughing_dhawan[252472]: 167 167
Feb  1 10:23:22 np0005604375 podman[252456]: 2026-02-01 15:23:22.291395871 +0000 UTC m=+0.109033277 container died 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  1 10:23:22 np0005604375 podman[252456]: 2026-02-01 15:23:22.20036358 +0000 UTC m=+0.018000966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:23:22 np0005604375 systemd[1]: var-lib-containers-storage-overlay-256f3ba90a51354c474f352424ecb085c5dc2ddac564003353c5af6214c64dad-merged.mount: Deactivated successfully.
Feb  1 10:23:22 np0005604375 podman[252456]: 2026-02-01 15:23:22.329811817 +0000 UTC m=+0.147449183 container remove 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  1 10:23:22 np0005604375 systemd[1]: libpod-conmon-210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b.scope: Deactivated successfully.
Feb  1 10:23:22 np0005604375 podman[252496]: 2026-02-01 15:23:22.489634216 +0000 UTC m=+0.064652933 container create ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  1 10:23:22 np0005604375 systemd[1]: Started libpod-conmon-ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03.scope.
Feb  1 10:23:22 np0005604375 podman[252496]: 2026-02-01 15:23:22.461284942 +0000 UTC m=+0.036303709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:23:22 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:23:22 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb  1 10:23:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:22 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:22 np0005604375 podman[252496]: 2026-02-01 15:23:22.596239914 +0000 UTC m=+0.171258661 container init ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  1 10:23:22 np0005604375 podman[252496]: 2026-02-01 15:23:22.60787507 +0000 UTC m=+0.182893787 container start ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  1 10:23:22 np0005604375 podman[252496]: 2026-02-01 15:23:22.612060567 +0000 UTC m=+0.187079254 container attach ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]: {
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:    "0": [
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:        {
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "devices": [
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "/dev/loop3"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            ],
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_name": "ceph_lv0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_size": "21470642176",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "name": "ceph_lv0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "tags": {
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cluster_name": "ceph",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.crush_device_class": "",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.encrypted": "0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.objectstore": "bluestore",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osd_id": "0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.type": "block",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.vdo": "0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.with_tpm": "0"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            },
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "type": "block",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "vg_name": "ceph_vg0"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:        }
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:    ],
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:    "1": [
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:        {
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "devices": [
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "/dev/loop4"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            ],
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_name": "ceph_lv1",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_size": "21470642176",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "name": "ceph_lv1",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "tags": {
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cluster_name": "ceph",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.crush_device_class": "",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.encrypted": "0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.objectstore": "bluestore",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osd_id": "1",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.type": "block",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.vdo": "0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.with_tpm": "0"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            },
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "type": "block",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "vg_name": "ceph_vg1"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:        }
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:    ],
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:    "2": [
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:        {
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "devices": [
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "/dev/loop5"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            ],
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_name": "ceph_lv2",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_size": "21470642176",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "name": "ceph_lv2",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "tags": {
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cephx_lockbox_secret": "",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.cluster_name": "ceph",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.crush_device_class": "",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.encrypted": "0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.objectstore": "bluestore",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osd_id": "2",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.type": "block",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.vdo": "0",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:                "ceph.with_tpm": "0"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            },
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "type": "block",
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:            "vg_name": "ceph_vg2"
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:        }
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]:    ]
Feb  1 10:23:22 np0005604375 festive_dubinsky[252513]: }
Feb  1 10:23:22 np0005604375 systemd[1]: libpod-ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03.scope: Deactivated successfully.
Feb  1 10:23:22 np0005604375 podman[252523]: 2026-02-01 15:23:22.920096859 +0000 UTC m=+0.024720923 container died ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  1 10:23:22 np0005604375 systemd[1]: var-lib-containers-storage-overlay-cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334-merged.mount: Deactivated successfully.
Feb  1 10:23:22 np0005604375 podman[252523]: 2026-02-01 15:23:22.961988993 +0000 UTC m=+0.066613047 container remove ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:23:22 np0005604375 systemd[1]: libpod-conmon-ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03.scope: Deactivated successfully.
Feb  1 10:23:23 np0005604375 podman[252600]: 2026-02-01 15:23:23.402265762 +0000 UTC m=+0.052432951 container create 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  1 10:23:23 np0005604375 systemd[1]: Started libpod-conmon-70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c.scope.
Feb  1 10:23:23 np0005604375 podman[252600]: 2026-02-01 15:23:23.377619361 +0000 UTC m=+0.027786600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:23:23 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:23:23 np0005604375 podman[252600]: 2026-02-01 15:23:23.487408968 +0000 UTC m=+0.137576147 container init 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  1 10:23:23 np0005604375 podman[252600]: 2026-02-01 15:23:23.494840386 +0000 UTC m=+0.145007555 container start 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  1 10:23:23 np0005604375 reverent_banzai[252617]: 167 167
Feb  1 10:23:23 np0005604375 podman[252600]: 2026-02-01 15:23:23.498744165 +0000 UTC m=+0.148911324 container attach 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  1 10:23:23 np0005604375 systemd[1]: libpod-70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c.scope: Deactivated successfully.
Feb  1 10:23:23 np0005604375 podman[252600]: 2026-02-01 15:23:23.499730073 +0000 UTC m=+0.149897232 container died 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  1 10:23:23 np0005604375 systemd[1]: var-lib-containers-storage-overlay-2fda374ed9df44ca5d42fc3f5de54c7ce3838a13102a99292312174acdafd9c7-merged.mount: Deactivated successfully.
Feb  1 10:23:23 np0005604375 podman[252600]: 2026-02-01 15:23:23.532363798 +0000 UTC m=+0.182530947 container remove 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  1 10:23:23 np0005604375 systemd[1]: libpod-conmon-70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c.scope: Deactivated successfully.
Feb  1 10:23:23 np0005604375 podman[252641]: 2026-02-01 15:23:23.65447065 +0000 UTC m=+0.034722315 container create bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  1 10:23:23 np0005604375 systemd[1]: Started libpod-conmon-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope.
Feb  1 10:23:23 np0005604375 systemd[1]: Started libcrun container.
Feb  1 10:23:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:23 np0005604375 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  1 10:23:23 np0005604375 podman[252641]: 2026-02-01 15:23:23.63914646 +0000 UTC m=+0.019398145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  1 10:23:23 np0005604375 podman[252641]: 2026-02-01 15:23:23.735156891 +0000 UTC m=+0.115408606 container init bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  1 10:23:23 np0005604375 podman[252641]: 2026-02-01 15:23:23.74334167 +0000 UTC m=+0.123593375 container start bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  1 10:23:23 np0005604375 podman[252641]: 2026-02-01 15:23:23.747523967 +0000 UTC m=+0.127775682 container attach bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  1 10:23:24 np0005604375 lvm[252738]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:23:24 np0005604375 lvm[252738]: VG ceph_vg1 finished
Feb  1 10:23:24 np0005604375 lvm[252737]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:23:24 np0005604375 lvm[252737]: VG ceph_vg0 finished
Feb  1 10:23:24 np0005604375 lvm[252740]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:23:24 np0005604375 lvm[252740]: VG ceph_vg2 finished
Feb  1 10:23:24 np0005604375 happy_colden[252658]: {}
Feb  1 10:23:24 np0005604375 systemd[1]: libpod-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope: Deactivated successfully.
Feb  1 10:23:24 np0005604375 podman[252641]: 2026-02-01 15:23:24.532916358 +0000 UTC m=+0.913168043 container died bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  1 10:23:24 np0005604375 systemd[1]: libpod-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope: Consumed 1.177s CPU time.
Feb  1 10:23:24 np0005604375 systemd[1]: var-lib-containers-storage-overlay-20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb-merged.mount: Deactivated successfully.
Feb  1 10:23:24 np0005604375 podman[252641]: 2026-02-01 15:23:24.573748422 +0000 UTC m=+0.954000127 container remove bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  1 10:23:24 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb  1 10:23:24 np0005604375 systemd[1]: libpod-conmon-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope: Deactivated successfully.
Feb  1 10:23:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  1 10:23:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:23:24 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  1 10:23:24 np0005604375 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:23:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:23:25 np0005604375 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb  1 10:23:26 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:26 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659720693395622 of space, bias 1.0, pg target 0.19979162080186866 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005843358214668303 of space, bias 4.0, pg target 0.7012029857601964 quantized to 16 (current 16)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  1 10:23:28 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb  1 10:23:30 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb  1 10:23:31 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:32 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Feb  1 10:23:33 np0005604375 systemd-logind[786]: New session 51 of user zuul.
Feb  1 10:23:33 np0005604375 systemd[1]: Started Session 51 of User zuul.
Feb  1 10:23:34 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:35 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:36 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:36 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14504 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:36 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:37 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  1 10:23:37 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008845376' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb  1 10:23:38 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:40 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:41 np0005604375 podman[253081]: 2026-02-01 15:23:41.000936647 +0000 UTC m=+0.080271381 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb  1 10:23:41 np0005604375 podman[253082]: 2026-02-01 15:23:41.030388252 +0000 UTC m=+0.109890801 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  1 10:23:41 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:42 np0005604375 ovs-vsctl[253154]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb  1 10:23:42 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:43 np0005604375 virtqemud[238696]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb  1 10:23:43 np0005604375 virtqemud[238696]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb  1 10:23:43 np0005604375 virtqemud[238696]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb  1 10:23:43 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: cache status {prefix=cache status} (starting...)
Feb  1 10:23:43 np0005604375 lvm[253478]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  1 10:23:43 np0005604375 lvm[253478]: VG ceph_vg2 finished
Feb  1 10:23:43 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: client ls {prefix=client ls} (starting...)
Feb  1 10:23:44 np0005604375 lvm[253513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  1 10:23:44 np0005604375 lvm[253513]: VG ceph_vg0 finished
Feb  1 10:23:44 np0005604375 lvm[253518]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  1 10:23:44 np0005604375 lvm[253518]: VG ceph_vg1 finished
Feb  1 10:23:44 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:44 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: damage ls {prefix=damage ls} (starting...)
Feb  1 10:23:44 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump loads {prefix=dump loads} (starting...)
Feb  1 10:23:44 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:44 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14510 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:44 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb  1 10:23:44 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb  1 10:23:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:45 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063533233' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb  1 10:23:45 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb  1 10:23:45 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334816464' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  1 10:23:45 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14516 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:45 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  1 10:23:45 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:23:45.491+0000 7f8298063640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  1 10:23:45 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb  1 10:23:45 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: ops {prefix=ops} (starting...)
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467706821' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb  1 10:23:45 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596127942' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb  1 10:23:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb  1 10:23:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592022745' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb  1 10:23:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:46 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session ls {prefix=session ls} (starting...)
Feb  1 10:23:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  1 10:23:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3229577900' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  1 10:23:46 np0005604375 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: status {prefix=status} (starting...)
Feb  1 10:23:46 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:46 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:46 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  1 10:23:46 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822312700' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  1 10:23:47 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  1 10:23:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834918764' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  1 10:23:47 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb  1 10:23:47 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762072287' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb  1 10:23:48 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  1 10:23:48 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730721995' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  1 10:23:48 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:23:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  1 10:23:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012918075' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb  1 10:23:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb  1 10:23:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738769593' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb  1 10:23:49 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  1 10:23:49 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1066002277' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14546 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:49 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  1 10:23:49 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:23:49.802+0000 7f8298063640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  1 10:23:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874586709' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb  1 10:23:50 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:50 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002663905' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3867469104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  1 10:23:50 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3867469104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  1 10:23:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 1335296 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 712936 data_alloc: 218103808 data_used: 4907
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 1327104 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 1327104 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19(unlocked)] enter Initial
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=0 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000132 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=0 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000040
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000306 1 0.000144
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000266 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000631 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 1310720 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001632 2 0.000372
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002386 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.002429 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000125 1 0.000182
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fcef7000/0x0/0x4ffc00000, data 0x94d9b/0x133000, compress 0x0/0x0/0x0, omap 0xd31a, meta 0x2bc2ce6), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1302528 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.003290 6 0.000056
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003966 3 0.000144
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000073 1 0.000068
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.063830 1 0.000049
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 1318912 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746573 data_alloc: 218103808 data_used: 4907
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.600705147s of 10.651138306s, submitted: 32
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.967675 1 0.000051
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive 1.035690 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started 2.039033 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000207 1 0.000277
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] exit Start 0.000039 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000129
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001303 3 0.000067
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1236992 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fceeb000/0x0/0x4ffc00000, data 0x99fdd/0x13f000, compress 0x0/0x0/0x0, omap 0xdabb, meta 0x2bc2545), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002432 2 0.000130
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003907 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003832 3 0.000287
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 188416 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 188416 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 147456 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 147456 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753638 data_alloc: 218103808 data_used: 4907
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 131072 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 122880 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fceea000/0x0/0x4ffc00000, data 0x9ba2c/0x142000, compress 0x0/0x0/0x0, omap 0xdd46, meta 0x2bc22ba), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 122880 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 114688 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 114688 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 760772 data_alloc: 218103808 data_used: 4907
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0x9d5c8/0x145000, compress 0x0/0x0/0x0, omap 0xdfd1, meta 0x2bc202f), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 43.935603 77 0.000345
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active 43.940417 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary 44.947390 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] exit Started 44.947452 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064671516s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 active pruub 187.699172974s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] exit Reset 0.000079 1 0.000131
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] exit Start 0.000009 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] enter Started/Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.079073906s of 10.115900993s, submitted: 19
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/Stray 0.802265 3 0.000164
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started 0.802306 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] exit Reset 0.000075 1 0.000104
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000039
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 81920 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011911 4 0.000081
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012051 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.004974 5 0.000388
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000136 1 0.000064
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000519 1 0.000190
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063635 2 0.000106
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 0 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.957100 1 0.000079
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary/Active 1.026742 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary 2.038831 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started 2.038865 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977913857s) [0] async=[0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 active pruub 193.453887939s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] exit Reset 0.000368 1 0.000452
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] exit Start 0.000041 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] enter Started/Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 983040 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 983040 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/Stray 1.265489 6 0.000170
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001688 2 0.000079
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] lb MIN local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 DELETING pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069028 2 0.000378
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] lb MIN local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/ToDelete 0.070794 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] lb MIN local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started 1.336395 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fced9000/0x0/0x4ffc00000, data 0xa40b1/0x151000, compress 0x0/0x0/0x0, omap 0xe9fd, meta 0x2bc1603), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764285 data_alloc: 218103808 data_used: 4907
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 942080 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fced7000/0x0/0x4ffc00000, data 0xa595e/0x151000, compress 0x0/0x0/0x0, omap 0xec88, meta 0x2bc1378), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 942080 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fced7000/0x0/0x4ffc00000, data 0xa595e/0x151000, compress 0x0/0x0/0x0, omap 0xec88, meta 0x2bc1378), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 933888 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 909312 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 909312 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 109 handle_osd_map epochs [110,111], i have 109, src has [1,111]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 77.680807 137 0.000527
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active 77.686886 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] exit Started/Primary 78.705678 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] exit Started 78.705714 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319766998s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 active pruub 195.875000000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] exit Reset 0.000084 1 0.000128
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] exit Start 0.000009 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] enter Started/Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 111 handle_osd_map epochs [110,111], i have 111, src has [1,111]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772342 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 909312 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003470421s of 10.054692268s, submitted: 31
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/Stray 1.013615 3 0.000043
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started 1.013702 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] exit Reset 0.000142 1 0.000216
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000051
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000096 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 901120 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fced3000/0x0/0x4ffc00000, data 0xa9096/0x157000, compress 0x0/0x0/0x0, omap 0xef13, meta 0x2bc10ed), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015991 4 0.000127
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.016193 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 113 handle_osd_map epochs [112,113], i have 113, src has [1,113]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 884736 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.252483 5 0.000380
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000093 1 0.000075
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000338 1 0.000037
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.044640 2 0.000046
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 113 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.709325 1 0.000091
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007180 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary 2.023417 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started 2.023451 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244387627s) [0] async=[0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 active pruub 203.837112427s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] exit Reset 0.000261 1 0.000378
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] exit Start 0.000014 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] enter Started/Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 868352 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/Stray 1.014347 7 0.000120
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000109 1 0.000090
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] lb MIN local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 DELETING pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.047704 2 0.000323
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] lb MIN local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047921 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] lb MIN local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started 1.062361 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 868352 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775463 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec4000/0x0/0x4ffc00000, data 0xaf99a/0x162000, compress 0x0/0x0/0x0, omap 0xf93f, meta 0x2bc06c1), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 84.769833 150 0.000511
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 84.772597 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary 85.781004 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] exit Started 85.781064 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.231036186s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 active pruub 204.880294800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] exit Reset 0.000120 1 0.000206
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] exit Start 0.000014 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] enter Started/Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/Stray 1.022983 3 0.000067
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started 1.023036 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY mbc={}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000083 1 0.000120
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000045
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000066 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1843200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782960 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011094 4 0.000105
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.011263 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1843200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.596240 5 0.000849
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000221 1 0.000121
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000433 1 0.000046
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039138 2 0.000081
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.136539459s of 10.210332870s, submitted: 32
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 118 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.381405 1 0.000071
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.018112 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary 2.029400 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started 2.029427 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] enter Reset
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578289032s) [1] async=[1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 active pruub 212.280242920s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] exit Reset 0.000117 1 0.000186
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] enter Started
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] enter Start
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] exit Start 0.000009 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] enter Started/Stray
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1835008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1835008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/Stray 1.012802 7 0.000102
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000121 1 0.000084
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] lb MIN local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 DELETING pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039371 2 0.000244
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] lb MIN local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039567 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] lb MIN local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started 1.052435 0 0.000000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb6483/0x16e000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1802240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1785856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 788020 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1777664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 1794048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1777664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1777664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1769472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795259 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1769472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1769472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1761280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1761280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1761280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797672 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1753088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.676446915s of 14.723983765s, submitted: 18
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1753088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1720320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1720320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1712128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802494 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1712128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1712128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1703936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1703936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1695744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804907 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1695744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1695744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1687552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1687552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1679360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804907 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1679360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1679360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1671168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1671168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1662976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.070312500s of 19.081048965s, submitted: 6
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807318 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1662976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1654784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1646592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1638400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1638400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809729 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1630208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1630208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1622016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1622016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1622016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814553 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1613824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1613824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1605632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1605632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1597440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814553 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.962125778s of 17.975889206s, submitted: 8
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1572864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1572864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816964 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1556480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1556480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1556480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816964 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1548288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1548288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.124578476s of 10.131445885s, submitted: 4
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824197 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1523712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1523712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826608 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1507328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1507328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 1499136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1490944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.085702896s of 11.097998619s, submitted: 8
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1490944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833845 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1482752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1474560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1474560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841078 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1449984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1449984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1441792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1441792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1441792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.029262543s of 11.049038887s, submitted: 12
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845902 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1417216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1417216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1409024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1409024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848313 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1384448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1376256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1368064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853141 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1368064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.821199417s of 10.835725784s, submitted: 8
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1351680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1343488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1343488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1335296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857971 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1310720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1302528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1302528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862797 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.051155090s of 10.064999580s, submitted: 8
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1269760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867627 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1236992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 872455 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1220608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1220608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1212416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1212416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 872455 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1204224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.166114807s of 18.179595947s, submitted: 8
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877283 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1171456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1171456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886937 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886937 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.858505249s of 12.879987717s, submitted: 12
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1130496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1130496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889350 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1122304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898998 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1089536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1089536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1081344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.793622971s of 11.888650894s, submitted: 12
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1081344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906231 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1064960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1040384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908644 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1007616 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1007616 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 999424 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908644 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 999424 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.192355156s of 13.201869965s, submitted: 6
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.c scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.c scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915877 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 950272 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 942080 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920701 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 917504 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 917504 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920701 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.002473831s of 14.233125687s, submitted: 10
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 884736 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 851968 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 835584 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 827392 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 827392 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 778240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 778240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 778240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 720896 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 720896 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 696320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 696320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 688128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 688128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 688128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 655360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 647168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 630784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 630784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 589824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 589824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 581632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 581632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 548864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 540672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 540672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 532480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 532480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 516096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 516096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 491520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 483328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 442368 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 409600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 401408 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 368640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 335872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 335872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 311296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 311296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 286720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 286720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 212992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 212992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 18.44 MB, 0.03 MB/s#012Interval WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 933888 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 933888 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 276.334167480s of 276.342437744s, submitted: 4
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 925696 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 827392 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 1572864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 1572864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 1564672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 1564672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 1564672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 1556480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 1556480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 1548288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 1548288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1540096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1540096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 1531904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 1531904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 1531904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 1523712 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 1515520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 1515520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1507328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1507328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1499136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1499136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1499136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1490944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1490944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1482752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1466368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 1441792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 1441792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 1425408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 1425408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 1417216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 1417216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 1417216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1409024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1409024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1409024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1400832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1400832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1384448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1384448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1368064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1368064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 1359872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 1359872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 1351680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 1351680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 1335296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 1335296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 1318912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 1318912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc ms_handle_reset ms_handle_reset con 0x560d7ff3a000
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_configure stats_period=5
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.010803223s of 300.151306152s, submitted: 90
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 1130496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5731 writes, 24K keys, 5731 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5731 writes, 924 syncs, 6.20 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.904083252s of 299.935333252s, submitted: 24
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 417792 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 417792 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 344064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 45.135498047s of 45.419361115s, submitted: 90
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 221184 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980548 data_alloc: 218103808 data_used: 6997
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 123 ms_handle_reset con 0x560d80adf400 session 0x560d7ff9e540
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 16801792 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 24027136 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb0000/0x0/0x4ffc00000, data 0x10bd1eb/0x117c000, compress 0x0/0x0/0x0, omap 0x10eb5, meta 0x2bbf14b), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbeb0000/0x0/0x4ffc00000, data 0x10bd1eb/0x117c000, compress 0x0/0x0/0x0, omap 0x10eb5, meta 0x2bbf14b), peers [0,1] op hist [1])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 124 ms_handle_reset con 0x560d82092800 session 0x560d82910380
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1032047 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbeaa000/0x0/0x4ffc00000, data 0x10bedc6/0x1180000, compress 0x0/0x0/0x0, omap 0x11261, meta 0x2bbed9f), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034533 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034533 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034533 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.540817261s of 24.730327606s, submitted: 57
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 23855104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c08e0/0x1184000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037197 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 23805952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 10
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038745 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 11
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.054696083s of 11.060445786s, submitted: 3
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036335 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 23519232 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035361 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea8000/0x0/0x4ffc00000, data 0x10c08e0/0x1184000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbea3000/0x0/0x4ffc00000, data 0x10c24e5/0x1187000, compress 0x0/0x0/0x0, omap 0x117c4, meta 0x2bbe83c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.055249214s of 10.100062370s, submitted: 26
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040547 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbea2000/0x0/0x4ffc00000, data 0x10c2580/0x1188000, compress 0x0/0x0/0x0, omap 0x117c4, meta 0x2bbe83c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043177 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042587 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.503337860s of 10.511715889s, submitted: 14
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 23494656 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 23494656 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea1000/0x0/0x4ffc00000, data 0x10c3fff/0x118b000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043559 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea1000/0x0/0x4ffc00000, data 0x10c3fff/0x118b000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042825 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.553779602s of 10.561478615s, submitted: 3
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbe9d000/0x0/0x4ffc00000, data 0x10c5b69/0x118d000, compress 0x0/0x0/0x0, omap 0x11d28, meta 0x2bbe2d8), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbe9d000/0x0/0x4ffc00000, data 0x10c5b69/0x118d000, compress 0x0/0x0/0x0, omap 0x11d28, meta 0x2bbe2d8), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046319 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 23445504 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe9b000/0x0/0x4ffc00000, data 0x10c754d/0x118f000, compress 0x0/0x0/0x0, omap 0x12001, meta 0x2bbdfff), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 23445504 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051741 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 23445504 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe96000/0x0/0x4ffc00000, data 0x10c9182/0x1192000, compress 0x0/0x0/0x0, omap 0x1228c, meta 0x2bbdd74), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051741 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.750567436s of 14.883956909s, submitted: 62
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 23429120 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 23429120 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 23420928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10cadf2/0x1198000, compress 0x0/0x0/0x0, omap 0x125e4, meta 0x2bbda1c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 23412736 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059973 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 23412736 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10cae20/0x1198000, compress 0x0/0x0/0x0, omap 0x125e4, meta 0x2bbda1c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe94000/0x0/0x4ffc00000, data 0x10cc826/0x1198000, compress 0x0/0x0/0x0, omap 0x1286f, meta 0x2bbd791), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061405 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 22609920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 22609920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 132 handle_osd_map epochs [133,134], i have 132, src has [1,134]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.087007523s of 11.162994385s, submitted: 50
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 21553152 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 21536768 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 21504000 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d1c51/0x11a3000, compress 0x0/0x0/0x0, omap 0x12dd8, meta 0x2bbd228), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070609 data_alloc: 218103808 data_used: 8195
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 21504000 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 21504000 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe80000/0x0/0x4ffc00000, data 0x10d53cc/0x11aa000, compress 0x0/0x0/0x0, omap 0x1333c, meta 0x2bbccc4), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076731 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 137 handle_osd_map epochs [138,139], i have 137, src has [1,139]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.667154312s of 12.851060867s, submitted: 105
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080763 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe81000/0x0/0x4ffc00000, data 0x10d51fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x1333c, meta 0x2bbccc4), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 21405696 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 21372928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe7d000/0x0/0x4ffc00000, data 0x10d891b/0x11ad000, compress 0x0/0x0/0x0, omap 0x13699, meta 0x2bbc967), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 21372928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 21372928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083105 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10da3e6/0x11b0000, compress 0x0/0x0/0x0, omap 0x13a2c, meta 0x2bbc5d4), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085879 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fbe77000/0x0/0x4ffc00000, data 0x10dbe81/0x11b3000, compress 0x0/0x0/0x0, omap 0x13d2f, meta 0x2bbc2d1), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085879 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.290771484s of 18.353887558s, submitted: 63
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe77000/0x0/0x4ffc00000, data 0x10dbe81/0x11b3000, compress 0x0/0x0/0x0, omap 0x13d2f, meta 0x2bbc2d1), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088653 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe74000/0x0/0x4ffc00000, data 0x10dd900/0x11b6000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe73000/0x0/0x4ffc00000, data 0x10dd99b/0x11b7000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79290368 unmapped: 20250624 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090345 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 20234240 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe73000/0x0/0x4ffc00000, data 0x10dd99b/0x11b7000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe73000/0x0/0x4ffc00000, data 0x10dd99b/0x11b7000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.632844925s of 10.640859604s, submitted: 13
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087933 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0x10dd900/0x11b6000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0x10dd900/0x11b6000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091427 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbe71000/0x0/0x4ffc00000, data 0x10df505/0x11b9000, compress 0x0/0x0/0x0, omap 0x142cd, meta 0x2bbbd33), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094201 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6e000/0x0/0x4ffc00000, data 0x10e0f84/0x11bc000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6e000/0x0/0x4ffc00000, data 0x10e0f84/0x11bc000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.540208817s of 15.619994164s, submitted: 64
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095893 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 20217856 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e101f/0x11bd000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 20217856 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 20217856 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 20201472 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098413 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e10ba/0x11be000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e10ba/0x11be000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.385616302s of 10.393396378s, submitted: 4
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096977 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e101f/0x11bd000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097933 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbe6b000/0x0/0x4ffc00000, data 0x10e2b89/0x11bf000, compress 0x0/0x0/0x0, omap 0x1486b, meta 0x2bbb795), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbe6b000/0x0/0x4ffc00000, data 0x10e2b89/0x11bf000, compress 0x0/0x0/0x0, omap 0x1486b, meta 0x2bbb795), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097933 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.764005661s of 12.822526932s, submitted: 26
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe6b000/0x0/0x4ffc00000, data 0x10e2b89/0x11bf000, compress 0x0/0x0/0x0, omap 0x1486b, meta 0x2bbb795), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100707 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100707 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.986886024s of 10.001235008s, submitted: 13
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 20160512 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 20160512 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 20144128 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103371 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 20144128 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe67000/0x0/0x4ffc00000, data 0x10e47d9/0x11c5000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 20144128 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe66000/0x0/0x4ffc00000, data 0x10e484e/0x11c6000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108015 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e48c2/0x11c7000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 19021824 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.170200348s of 10.202037811s, submitted: 10
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 18857984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 18825216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 12
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 18882560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110241 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 18874368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e49bd/0x11c7000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 13
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e49bd/0x11c7000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 18759680 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe64000/0x0/0x4ffc00000, data 0x10e4837/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110929 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe66000/0x0/0x4ffc00000, data 0x10e4835/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.980128288s of 10.007835388s, submitted: 15
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 18759680 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe66000/0x0/0x4ffc00000, data 0x10e4809/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110753 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e4809/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111583 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e6373/0x11c8000, compress 0x0/0x0/0x0, omap 0x15044, meta 0x2bbafbc), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.957118034s of 12.004303932s, submitted: 30
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 18726912 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113417 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 18710528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e63a1/0x11c8000, compress 0x0/0x0/0x0, omap 0x15044, meta 0x2bbafbc), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 18710528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 18710528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116895 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe5f000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.981225967s of 10.005500793s, submitted: 19
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115441 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115441 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe64000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115969 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d55/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.155082703s of 13.166279793s, submitted: 6
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 18636800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d53/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 18636800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115953 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 18636800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115235 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115235 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000039s
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115235 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.768423080s of 17.776714325s, submitted: 3
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115953 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7dc2/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115953 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 148 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.382137299s of 11.391574860s, submitted: 3
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 18604032 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 18513920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 18513920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122525 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 17383424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fbe41000/0x0/0x4ffc00000, data 0x1108ae3/0x11eb000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x2bbaa38), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 17113088 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 16924672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 16924672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fbde2000/0x0/0x4ffc00000, data 0x1167e51/0x124a000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x2bbaa38), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 149 ms_handle_reset con 0x560d7ff3b400 session 0x560d81c43c00
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 13844480 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132665 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 14
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 13787136 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fabff000/0x0/0x4ffc00000, data 0x11a8eee/0x128c000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x3d5aa38), peers [0,1] op hist [0,1])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 12722176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 12247040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fabc8000/0x0/0x4ffc00000, data 0x11e02bc/0x12c3000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x3d5aa38), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.793652534s of 11.003534317s, submitted: 271
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 12959744 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88162304 unmapped: 11378688 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149699 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 10993664 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 10821632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 10821632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fab3a000/0x0/0x4ffc00000, data 0x126dcdd/0x1352000, compress 0x0/0x0/0x0, omap 0x158d9, meta 0x3d5a727), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 10747904 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10485760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148625 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 10256384 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 9592832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4faa8c000/0x0/0x4ffc00000, data 0x131b204/0x1400000, compress 0x0/0x0/0x0, omap 0x158d9, meta 0x3d5a727), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 15
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 90005504 unmapped: 9535488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.966275215s of 10.171354294s, submitted: 105
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 8052736 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4faa56000/0x0/0x4ffc00000, data 0x134ecc8/0x1434000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 8290304 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154013 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 8290304 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 8429568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 8159232 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4faa17000/0x0/0x4ffc00000, data 0x138f9e5/0x1475000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91013120 unmapped: 8527872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa9f7000/0x0/0x4ffc00000, data 0x13afea0/0x1495000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91013120 unmapped: 8527872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161625 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 8454144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91013120 unmapped: 8527872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92217344 unmapped: 7323648 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.723700523s of 10.847046852s, submitted: 63
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 7258112 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 7258112 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa972000/0x0/0x4ffc00000, data 0x143454d/0x151a000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165339 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92348416 unmapped: 7192576 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa962000/0x0/0x4ffc00000, data 0x1445242/0x152a000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 6979584 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 6914048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 6914048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8011776 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172933 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 8044544 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 6963200 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8ba000/0x0/0x4ffc00000, data 0x14eb5a7/0x15d2000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.517700195s of 10.702951431s, submitted: 59
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 6799360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173367 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 6799360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8b9000/0x0/0x4ffc00000, data 0x14eb5aa/0x15d2000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 6799360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8ba000/0x0/0x4ffc00000, data 0x14eb5a8/0x15d2000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173719 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8b6000/0x0/0x4ffc00000, data 0x14ed112/0x15d4000, compress 0x0/0x0/0x0, omap 0x15cb0, meta 0x3d5a350), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8b6000/0x0/0x4ffc00000, data 0x14ed112/0x15d4000, compress 0x0/0x0/0x0, omap 0x15cb0, meta 0x3d5a350), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.976147652s of 10.041531563s, submitted: 38
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8b6000/0x0/0x4ffc00000, data 0x14ed078/0x15d3000, compress 0x0/0x0/0x0, omap 0x15cb0, meta 0x3d5a350), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181937 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 6766592 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 6766592 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92782592 unmapped: 6758400 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185829 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8af000/0x0/0x4ffc00000, data 0x14f06ff/0x15d9000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183465 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.959350586s of 12.027016640s, submitted: 47
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 6684672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 154 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190197 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa8ab000/0x0/0x4ffc00000, data 0x14f3cc9/0x15dd000, compress 0x0/0x0/0x0, omap 0x163de, meta 0x3d59c22), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 6635520 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 6635520 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 6873088 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 6873088 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190721 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0x14f3d92/0x15de000, compress 0x0/0x0/0x0, omap 0x163de, meta 0x3d59c22), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 6864896 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.852005005s of 10.095981598s, submitted: 40
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92848128 unmapped: 6692864 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 6684672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 16
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192745 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5768/0x15e0000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5768/0x15e0000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192745 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5768/0x15e0000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.775607109s of 10.854191780s, submitted: 21
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194453 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ab000/0x0/0x4ffc00000, data 0x14f582f/0x15e1000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197213 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197213 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.543810844s of 14.834462166s, submitted: 29
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 6553600 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x14f8dec/0x15e6000, compress 0x0/0x0/0x0, omap 0x1652d, meta 0x3d59ad3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202251 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fa9f1/0x15e9000, compress 0x0/0x0/0x0, omap 0x1652d, meta 0x3d59ad3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6537216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202251 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6537216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6537216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fa9f1/0x15e9000, compress 0x0/0x0/0x0, omap 0x1652d, meta 0x3d59ad3), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.380644798s of 10.438771248s, submitted: 36
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 6529024 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 6529024 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 6529024 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206253 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc5d4/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207081 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5d2/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5d2/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205389 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fc50b/0x15ed000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.297727585s of 16.323945999s, submitted: 20
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 6512640 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208645 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 6512640 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fc66f/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 6512640 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fc66f/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208629 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc66d/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc66d/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5a6/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.305027008s of 10.329172134s, submitted: 10
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208039 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5a6/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5a6/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 6496256 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 6496256 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209013 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 6496256 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc5d4/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.115866661s of 10.131819725s, submitted: 9
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209365 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fc470/0x15ec000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc538/0x15ed000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209365 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fc536/0x15ed000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.196979523s of 10.256405830s, submitted: 81
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 161 ms_handle_reset con 0x560d82b7e400 session 0x560d828a8fc0
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 161 ms_handle_reset con 0x560d7f8a9800 session 0x560d837ebc00
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 17
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fe0a5/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212139 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x14ffbdf/0x15f3000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215633 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x14ffbdf/0x15f3000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.729774475s of 10.760393143s, submitted: 193
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213221 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89a000/0x0/0x4ffc00000, data 0x14ffb44/0x15f2000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89a000/0x0/0x4ffc00000, data 0x14ffb44/0x15f2000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213221 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x14ffc7a/0x15f4000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.998134613s of 10.008211136s, submitted: 6
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217579 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa893000/0x0/0x4ffc00000, data 0x150187f/0x15f7000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219811 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa893000/0x0/0x4ffc00000, data 0x150187f/0x15f7000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 5357568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 5357568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa891000/0x0/0x4ffc00000, data 0x1503263/0x15f9000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 5357568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221405 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.603826523s of 10.692914009s, submitted: 38
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0x15031c8/0x15f8000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa894000/0x0/0x4ffc00000, data 0x15031c8/0x15f8000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222377 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 5324800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 5324800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0x1506a6d/0x15ff000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228501 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 5324800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0x1506a6d/0x15ff000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.033651352s of 11.112089157s, submitted: 61
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230685 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0x150846d/0x1601000, compress 0x0/0x0/0x0, omap 0x16788, meta 0x3d59878), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0x150846d/0x1601000, compress 0x0/0x0/0x0, omap 0x16788, meta 0x3d59878), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0x150846d/0x1601000, compress 0x0/0x0/0x0, omap 0x16788, meta 0x3d59878), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230685 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.938399315s of 10.001417160s, submitted: 53
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236697 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236697 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 169 handle_osd_map epochs [170,171], i have 169, src has [1,171]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.968458176s of 10.001696587s, submitted: 32
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 5292032 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242213 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242213 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8843 writes, 32K keys, 8843 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 8843 writes, 2113 syncs, 4.19 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3112 writes, 8422 keys, 3112 commit groups, 1.0 writes per commit group, ingest: 8.08 MB, 0.01 MB/s#012Interval WAL: 3112 writes, 1189 syncs, 2.62 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc ms_handle_reset ms_handle_reset con 0x560d82092400
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_configure stats_period=5
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.867332458s of 56.905132294s, submitted: 32
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246247 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 18
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa877000/0x0/0x4ffc00000, data 0x1512a33/0x1613000, compress 0x0/0x0/0x0, omap 0x199f1, meta 0x3d5660f), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 19
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247329 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 5275648 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0x1512a33/0x1613000, compress 0x0/0x0/0x0, omap 0x199f1, meta 0x3d5660f), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 5275648 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.735460281s of 10.886501312s, submitted: 58
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246609 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250103 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250103 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250103 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.894744873s of 21.044736862s, submitted: 94
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252877 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa871000/0x0/0x4ffc00000, data 0x15160b7/0x1619000, compress 0x0/0x0/0x0, omap 0x19f8d, meta 0x3d56073), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa871000/0x0/0x4ffc00000, data 0x15160b7/0x1619000, compress 0x0/0x0/0x0, omap 0x19f8d, meta 0x3d56073), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252877 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219406421' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 59.063335419s of 59.114078522s, submitted: 40
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 ms_handle_reset con 0x560d82a70000 session 0x560d80e7c1c0
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 5021696 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 5021696 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Got map version 20
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 4956160 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'config diff' '{prefix=config diff}'
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'config show' '{prefix=config show}'
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'counter dump' '{prefix=counter dump}'
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'counter schema' '{prefix=counter schema}'
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 4431872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 4431872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 4407296 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb  1 10:23:51 np0005604375 ceph-osd[88066]: do_command 'log dump' '{prefix=log dump}'
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  1 10:23:51 np0005604375 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  1 10:23:51 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  1 10:23:51 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831620940' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  1 10:23:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:52 np0005604375 nova_compute[238794]: 2026-02-01 15:23:52.315 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:23:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  1 10:23:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2019968640' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  1 10:23:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:52 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:52 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  1 10:23:52 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403618103' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  1 10:23:52 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  1 10:23:53 np0005604375 nova_compute[238794]: 2026-02-01 15:23:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:23:53 np0005604375 nova_compute[238794]: 2026-02-01 15:23:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:23:53 np0005604375 nova_compute[238794]: 2026-02-01 15:23:53.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  1 10:23:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14580 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  1 10:23:53 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  1 10:23:53 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009433075' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  1 10:23:53 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14582 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  1 10:23:54 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb  1 10:23:54 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681521670' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb  1 10:23:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  1 10:23:54 np0005604375 nova_compute[238794]: 2026-02-01 15:23:54.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  1 10:23:54 np0005604375 nova_compute[238794]: 2026-02-01 15:23:54.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  1 10:23:54 np0005604375 nova_compute[238794]: 2026-02-01 15:23:54.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  1 10:23:54 np0005604375 nova_compute[238794]: 2026-02-01 15:23:54.337 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  1 10:23:54 np0005604375 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb  1 10:23:54 np0005604375 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14590 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  1 10:23:54 np0005604375 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  1 10:23:54 np0005604375 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:23:54.726+0000 7f8298063640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  1 10:23:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb  1 10:23:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472477405' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb  1 10:23:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb  1 10:23:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998708105' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fce8f000/0x0/0x4ffc00000, data 0xfffd0/0x19b000, compress 0x0/0x0/0x0, omap 0xfa0a, meta 0x2bc05f6), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1384448 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1368064 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fce8e000/0x0/0x4ffc00000, data 0x101b6c/0x19e000, compress 0x0/0x0/0x0, omap 0xfc88, meta 0x2bc0378), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1351680 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738022 data_alloc: 218103808 data_used: 6261
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1351680 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1343488 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 98 handle_osd_map epochs [99,100], i have 98, src has [1,100]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1335296 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0x106d25/0x1a7000, compress 0x0/0x0/0x0, omap 0x1018a, meta 0x2bbfe76), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.382933617s of 10.406901360s, submitted: 22
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1335296 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0x106d25/0x1a7000, compress 0x0/0x0/0x0, omap 0x1018a, meta 0x2bbfe76), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1286144 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751529 data_alloc: 218103808 data_used: 6261
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1269760 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1269760 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1261568 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fce78000/0x0/0x4ffc00000, data 0x10bd95/0x1b0000, compress 0x0/0x0/0x0, omap 0x1091c, meta 0x2bbf6e4), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1261568 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1261568 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758405 data_alloc: 218103808 data_used: 6261
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1253376 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 103 handle_osd_map epochs [103,104], i have 104, src has [1,104]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1253376 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1236992 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fce77000/0x0/0x4ffc00000, data 0x10d931/0x1b3000, compress 0x0/0x0/0x0, omap 0x10ba6, meta 0x2bbf45a), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1228800 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.441488266s of 10.474997520s, submitted: 19
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1220608 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777561 data_alloc: 218103808 data_used: 6261
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1212416 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1204224 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 107 handle_osd_map epochs [108,109], i have 107, src has [1,109]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1187840 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0x115e69/0x1c2000, compress 0x0/0x0/0x0, omap 0x115e2, meta 0x2bbea1e), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0x115e69/0x1c2000, compress 0x0/0x0/0x0, omap 0x115e2, meta 0x2bbea1e), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 2170880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791269 data_alloc: 218103808 data_used: 6538
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 2154496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 2203648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 2449408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.084319115s of 10.127679825s, submitted: 20
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 110 handle_osd_map epochs [111,112], i have 110, src has [1,112]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 2441216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808779 data_alloc: 218103808 data_used: 6538
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0x117a05/0x1c5000, compress 0x0/0x0/0x0, omap 0x11876, meta 0x2bbe78a), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0x11b022/0x1cb000, compress 0x0/0x0/0x0, omap 0x11b0c, meta 0x2bbe4f4), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 2539520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 2539520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 2531328 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 2473984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x11ff3d/0x1d4000, compress 0x0/0x0/0x0, omap 0x12257, meta 0x2bbdda9), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 2465792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823574 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 2465792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f(unlocked)] enter Initial
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=0 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000127 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=0 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000149
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000240 1 0.000158
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000049 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000325 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 2457600 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.011667 2 0.000101
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.012066 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.012157 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000133 1 0.000220
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000028 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce53000/0x0/0x4ffc00000, data 0x121ad9/0x1d7000, compress 0x0/0x0/0x0, omap 0x124f4, meta 0x2bbdb0c), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 2449408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 2449408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.984783173s of 10.033134460s, submitted: 23
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=38'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.607841 5 0.000098
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=38'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=38'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004135 4 0.000239
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000110 1 0.000073
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039386 1 0.000049
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.376952 1 0.000082
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started/ReplicaActive 0.420769 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started 2.028705 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] enter Reset
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000157 1 0.000220
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Start
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004526 2 0.000063
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001163 2 0.000128
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 2359296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850143 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003746 2 0.000104
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009550 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003072 4 0.000177
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 2351104 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 2351104 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce42000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 2342912 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 2326528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 2318336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858207 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 2318336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 2318336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce42000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 2293760 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 2277376 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 2269184 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 861245 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.880071640s of 10.929857254s, submitted: 28
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 2252800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 2252800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 2244608 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 2244608 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 2244608 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 866071 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 2228224 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 2228224 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868482 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.737944603s of 10.750078201s, submitted: 6
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 2220032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 2220032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 2203648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 2203648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875717 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 2187264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 2187264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875717 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 2170880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 2170880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 2162688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 2162688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.895403862s of 12.907132149s, submitted: 6
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 2154496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880539 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 2138112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 2138112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 2129920 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 2129920 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 2129920 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882950 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 2121728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 2121728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 2113536 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 2105344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 2097152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885361 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 2097152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 2088960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.042222023s of 13.057113647s, submitted: 8
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 2088960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 2088960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 2064384 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890185 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 2056192 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 2056192 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 2048000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 2048000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 2031616 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895009 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 2023424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 2015232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 2015232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.876013756s of 10.889564514s, submitted: 8
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 2015232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1998848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897420 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1998848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1990656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1990656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1982464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1982464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899831 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1982464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1974272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 1966080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 1966080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 1949696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.887371063s of 11.902037621s, submitted: 6
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904653 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 1949696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 1941504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 1941504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 1933312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 1933312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904653 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 1925120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 1916928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 1916928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1908736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1908736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907064 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1908736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 1900544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 1900544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 1892352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 1892352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907064 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 1884160 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 1884160 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.882419586s of 16.891012192s, submitted: 4
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909475 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1867776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1867776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1867776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911886 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 1859584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 1859584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 1851392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 1851392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 1843200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914297 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 1835008 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.116877556s of 14.127217293s, submitted: 6
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 1826816 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 1818624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 1810432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 1794048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921538 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 1794048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 1794048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 1785856 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 1785856 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 1777664 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923951 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 1769472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 1769472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 1761280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 1761280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 1753088 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.980370522s of 13.995172501s, submitted: 8
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926364 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 1736704 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 1720320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 1720320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 1720320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 1712128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931188 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 1712128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 1703936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 1703936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 1703936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 1695744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933601 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 1695744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.049659729s of 11.065093994s, submitted: 8
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1687552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1687552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1687552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 1679360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938429 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 1679360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 1671168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 1671168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 1662976 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 1654784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943255 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 1646592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 1638400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 1638400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.760448456s of 12.774148941s, submitted: 8
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 1630208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 1630208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945668 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 1622016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 1613824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 1613824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 1605632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 1605632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952909 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 1597440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 1581056 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 1572864 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 1572864 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 1572864 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959541321s of 11.087907791s, submitted: 10
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957731 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 1564672 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 1564672 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 1556480 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 1556480 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 1548288 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964964 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 1548288 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 1548288 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.b scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.b scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 1540096 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 1523712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 1523712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972199 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 1507328 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.094460487s of 11.182248116s, submitted: 14
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 1499136 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 1499136 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 1490944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 1490944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977025 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 1482752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 1482752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 1474560 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 1466368 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 1466368 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981849 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 1458176 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1449984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1449984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.739817619s of 12.821089745s, submitted: 10
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 1425408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1417216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989082 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1417216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1417216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1400832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1400832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 1384448 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 1376256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 1376256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 1368064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 1368064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1359872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1359872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1359872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1351680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1351680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1351680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 1343488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 1343488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 1335296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 1335296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 1335296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 1318912 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 1318912 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1310720 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1310720 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1302528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1302528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1302528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1294336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1294336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 1286144 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 1286144 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1277952 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1277952 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1277952 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 1261568 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 1261568 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 1253376 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 1253376 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 1245184 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 1236992 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 1236992 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1228800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1228800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1228800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1212416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1204224 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 1196032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 1196032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1187840 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1187840 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1187840 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1179648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1179648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1171456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1171456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1163264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1163264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1163264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1155072 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1146880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1146880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1138688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1138688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1130496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1130496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1130496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 1122304 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 1122304 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1114112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1114112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1097728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1097728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1097728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1089536 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1089536 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1081344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1081344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1081344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 1073152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 1073152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1064960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1064960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1064960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 1056768 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 1056768 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 1048576 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 1048576 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1040384 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1040384 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 1032192 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1024000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1024000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1015808 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 1007616 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 1007616 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 999424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 999424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 999424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 991232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 991232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 983040 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 983040 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 974848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 974848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 974848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 966656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 966656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 958464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 958464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 950272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 950272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 950272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 942080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 942080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 925696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 925696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 917504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 917504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 917504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 909312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 909312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 901120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 901120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 901120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 892928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 892928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 884736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 884736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 884736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 876544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 876544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 876544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 868352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 868352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 851968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 851968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 843776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 843776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 843776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 835584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 835584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 827392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 827392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 819200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 819200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 819200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 811008 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 811008 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 802816 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 802816 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 794624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 794624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 794624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 786432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 786432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 786432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 778240 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 778240 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 770048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 761856 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 753664 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 753664 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 737280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 737280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 720896 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 720896 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 712704 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 712704 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 704512 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 704512 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 688128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 688128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 688128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 679936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 679936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 671744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 671744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 663552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 663552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 663552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 655360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 655360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 655360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 647168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 647168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 638976 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 638976 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 630784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 630784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 630784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 622592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 622592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 622592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 614400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 614400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 606208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 606208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 598016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 598016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 598016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 573440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 573440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 573440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 19.77 MB, 0.03 MB/s#012Interval WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 507904 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 507904 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 499712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 499712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 499712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 491520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 491520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 483328 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 466944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 466944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 466944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 458752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 458752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 450560 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 442368 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 434176 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 434176 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 425984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 425984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 425984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 417792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 417792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 417792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 409600 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 409600 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 393216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 393216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 393216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 385024 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 385024 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 376832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 376832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 376832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 368640 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 368640 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 368640 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 360448 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 360448 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 352256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 352256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 344064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 344064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 344064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 335872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 335872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 327680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 327680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 327680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 319488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 319488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 319488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 311296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 277.726654053s of 277.740570068s, submitted: 8
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1359872 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 434176 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 434176 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 425984 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 425984 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 417792 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 417792 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 417792 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 409600 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 409600 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 401408 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 393216 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 393216 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 385024 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 385024 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 376832 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 376832 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 376832 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 368640 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 368640 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 360448 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 360448 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 360448 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 352256 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 352256 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 335872 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 335872 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 327680 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 327680 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 327680 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 319488 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 319488 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 311296 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 311296 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 303104 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811723650' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 303104 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 303104 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 294912 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 294912 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 286720 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 286720 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 278528 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 278528 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 278528 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 270336 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 270336 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: mgrc ms_handle_reset ms_handle_reset con 0x55a03c608000
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: mgrc handle_mgr_configure stats_period=5
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 ms_handle_reset con 0x55a03c609800 session 0x55a03d0eafc0
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 ms_handle_reset con 0x55a03d146400 session 0x55a03d0eac40
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.872650146s of 300.119934082s, submitted: 90
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb  1 10:23:55 np0005604375 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
